https://js.langchain.com/v0.2/docs/
Introduction
============
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
* **Development**: Build your applications using LangChain's open-source [building blocks](/v0.2/docs/how_to/#langchain-expression-language-lcel) and [components](/v0.2/docs/how_to/). Hit the ground running using [third-party integrations](/v0.2/docs/integrations/platforms/).
* **Productionization**: Use [LangSmith](/v0.2/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
* **Deployment**: Turn any chain into an API with [LangServe](https://www.langchain.com/langserve).
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
Concretely, the framework consists of the following open-source libraries (a short sketch of how they fit together follows this list):
* **`@langchain/core`**: Base abstractions and LangChain Expression Language.
* **`@langchain/community`**: Third party integrations.
* Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**.
* **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
* **[langgraph](/v0.2/docs/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
* **[LangSmith](/v0.2/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
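The following is a hedged sketch (not part of the original page) of how these layers fit together in practice; it assumes the `@langchain/openai` partner package is installed alongside `@langchain/core`:

```typescript
// Core abstractions (prompts, output parsers, LCEL) come from @langchain/core;
// the model integration comes from a partner package.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });

// LangChain Expression Language composes the pieces into a single runnable.
const chain = prompt.pipe(model).pipe(new StringOutputParser());

console.log(await chain.invoke({ topic: "bears" }));
```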
**Note:** These docs focus on the JavaScript LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
[Tutorials](/v0.2/docs/tutorials)
---------------------------------
If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials](/v0.2/docs/tutorials). They're the best place to get started. Good first picks include:
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
Explore the full list of tutorials [here](/v0.2/docs/tutorials).
[How-To Guides](/v0.2/docs/how_to/)
-----------------------------------
[Here](/v0.2/docs/how_to/) you'll find short answers to “How do I…?” questions. These how-to guides don't cover topics in depth; you'll find that material in the [Tutorials](/v0.2/docs/tutorials) and the [API Reference](https://v02.api.js.langchain.com). However, these guides will help you quickly accomplish common tasks.
[Conceptual Guide](/v0.2/docs/concepts)
---------------------------------------
Introductions to all the key parts of LangChain you'll need to know! [Here](/v0.2/docs/concepts) you'll find high-level explanations of all LangChain concepts.
[API reference](https://v02.api.js.langchain.com)
-------------------------------------------------
Head to the reference section for full documentation of all classes and methods in the LangChain JavaScript packages.
Ecosystem
---------
### [🦜🛠️ LangSmith](/v0.2/docs/langsmith)
Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
### [🦜🕸️ LangGraph](/v0.2/docs/langgraph)
Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
Additional resources
--------------------

### [Security](/v0.2/docs/security)
Read up on our [Security](/v0.2/docs/security) best practices to make sure you're developing safely with LangChain.
### [Integrations](/v0.2/docs/integrations/platforms/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.2/docs/integrations/platforms/).
### [Contributing](/v0.2/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
* * *

https://js.langchain.com/v0.2/docs/how_to/time_weighted_vectorstore
How to create a time-weighted retriever
=======================================
**Prerequisites:** This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
This guide covers the [`TimeWeightedVectorStoreRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html), which uses a combination of semantic similarity and a time decay.
The algorithm scores documents as:

`semantic_similarity + (1.0 - decay_rate) ^ hours_passed`
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh."
In the retriever's TypeScript source, this is computed as:

`let score = (1.0 - this.decayRate) ** hoursPassed + vectorRelevance;`
`this.decayRate` is a configurable decimal number between 0 and 1. A lower number means that documents will be "remembered" for longer, while a higher number strongly weights more recently accessed documents.
Note that setting a decay rate of exactly 0 or 1 makes `hoursPassed` irrelevant and makes this retriever equivalent to a standard vector lookup.
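As a purely illustrative sketch (not from the original page) of how `decayRate` shifts the balance, consider the time-decay term for a document last accessed 24 hours ago:

```typescript
// Illustrative only: evaluate (1 - decayRate) ** hoursPassed for a document
// last accessed 24 hours ago, at three different decay rates.
const hoursPassed = 24;

for (const decayRate of [0.01, 0.5, 0.99]) {
  const timeScore = (1.0 - decayRate) ** hoursPassed;
  console.log(`decayRate=${decayRate} -> time term ${timeScore.toFixed(6)}`);
}
// decayRate=0.01 -> time term 0.785678  (older documents stay competitive)
// decayRate=0.5  -> time term 0.000000  (recently accessed documents dominate)
// decayRate=0.99 -> time term 0.000000  (effectively a plain vector lookup)
```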
It is important to note that due to required metadata, all documents must be added to the backing vector store using the `addDocuments` method on the **retriever**, not the vector store itself.
**Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());

const retriever = new TimeWeightedVectorStoreRetriever({
  vectorStore,
  memoryStream: [],
  searchKwargs: 2,
});

const documents = [
  "My name is John.",
  "My name is Bob.",
  "My favourite food is pizza.",
  "My favourite food is pasta.",
  "My favourite food is sushi.",
].map((pageContent) => ({ pageContent, metadata: {} }));

// All documents must be added using this method on the retriever (not the vector store!)
// so that the correct access history metadata is populated
await retriever.addDocuments(documents);

const results1 = await retriever.invoke("What is my favourite food?");
console.log(results1);
/*
  [ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/

const results2 = await retriever.invoke("What is my favourite food?");
console.log(results2);
/*
  [ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/
```
#### API Reference:
* [TimeWeightedVectorStoreRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html) from `langchain/retrievers/time_weighted`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Next steps
----------
You've now learned how to use time as a factor when performing retrieval.
Next, check out the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
* * *

https://js.langchain.com/v0.2/docs/langsmith/
🦜🛠️ LangSmith
===============
[LangSmith](https://smith.langchain.com) lets you trace and evaluate your language model applications and intelligent agents, helping you move from prototype to production.
Check out the [interactive walkthrough](/v0.2/docs/langsmith/walkthrough) to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
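As a minimal sketch (not from this page) of what enabling tracing looks like: with a LangSmith API key, tracing is typically turned on through environment variables, with no code changes required:

```typescript
// Assumed setup via environment variables, set before running your app:
//   LANGCHAIN_TRACING_V2=true
//   LANGCHAIN_API_KEY=<your LangSmith API key>
//   LANGCHAIN_PROJECT=my-project   (optional; runs go to "default" otherwise)
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });

// With the variables above set, this call is traced to LangSmith automatically.
const response = await model.invoke("Hello, world!");
console.log(response.content);
```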
For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:
* Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
* Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
* How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
* How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
* How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb)).
* * *

https://js.langchain.com/v0.2/docs/how_to/vectorstores
How to create and query vector stores
=====================================
**Info:** Head to [Integrations](/v0.2/docs/integrations/vectorstores) for documentation on built-in integrations with vectorstore providers.

**Prerequisites:** This guide assumes familiarity with the following concepts:
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
* [Document loaders](/v0.2/docs/concepts#document-loaders)
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors; at query time, you embed the unstructured query and retrieve the embedding vectors that are 'most similar' to it. A vector store takes care of storing embedded data and performing vector search for you.
This walkthrough uses a basic, unoptimized implementation called [`MemoryVectorStore`](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) that stores embeddings in memory and does an exact, linear search for the most similar embeddings. LangChain contains many built-in integrations: see [this section](/v0.2/docs/how_to/vectorstores/#which-one-to-pick) for guidance on picking one, or the [full list of integrations](/v0.2/docs/integrations/vectorstores/).
Creating a new index
--------------------
Most of the time, you'll need to load and prepare the data you want to search over. Here's an example that loads a recent speech from a file:
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Most of the time, you'll need to split the loaded text as a preparation step. See [this section](/v0.2/docs/concepts/#text-splitters) to learn more about text splitters.
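For instance, here is a minimal sketch (assuming the `@langchain/textsplitters` package) of splitting the loaded `docs` from the example above before indexing them:

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Split long documents into overlapping chunks before embedding them.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splitDocs = await splitter.splitDocuments(docs);

// splitDocs can then be indexed exactly like docs above, e.g.:
// await MemoryVectorStore.fromDocuments(splitDocs, new OpenAIEmbeddings());
```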
Creating a new index from texts
-------------------------------
If you have already prepared the data you want to search over, you can initialize a vector store directly from text chunks:
**Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Which one to pick?
------------------
Here's a quick guide to help you pick the right vector store for your use case (a sketch of swapping between stores follows this list):
* If you're after something that can just run inside your Node.js application, in-memory, without any other servers to stand up, then go for [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib), [Faiss](/v0.2/docs/integrations/vectorstores/faiss), [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you're looking for something that can run in-memory in browser-like environments, then go for [MemoryVectorStore](/v0.2/docs/integrations/vectorstores/memory) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you come from Python and you were looking for something similar to FAISS, try [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib) or [Faiss](/v0.2/docs/integrations/vectorstores/faiss)
* If you're looking for an open-source full-featured vector database that you can run locally in a docker container, then go for [Chroma](/v0.2/docs/integrations/vectorstores/chroma)
* If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for [Zep](/v0.2/docs/integrations/vectorstores/zep)
* If you're looking for an open-source production-ready vector database that you can run locally (in a docker container) or hosted in the cloud, then go for [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate).
* If you're using Supabase already then look at the [Supabase](/v0.2/docs/integrations/vectorstores/supabase) vector store to use the same Postgres database for your embeddings too
* If you're looking for a production-ready vector store you don't have to worry about hosting yourself, then go for [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone)
* If you already use SingleStore, or need a distributed, high-performance database, you might want to consider the [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore) vector store.
* If you are looking for an online MPP (Massively Parallel Processing) data warehousing service, you might want to consider the [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb) vector store.
* If you're in search of a cost-effective vector database that lets you run vector search with SQL, look no further than [MyScale](/v0.2/docs/integrations/vectorstores/myscale).
* If you're in search of a vector database that you can load from both the browser and server side, check out [CloseVector](/v0.2/docs/integrations/vectorstores/closevector). It's a vector database that aims to be cross-platform.
* If you're looking for a scalable, open-source columnar database with excellent performance for analytical queries, then consider [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse).
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to load data into a vectorstore.
Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/how_to/tool_calls_multi_modal
How to call tools with multi-modal data
=======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
Here we demonstrate how to call tools with multi-modal data, such as images.
Some multi-modal models, such as those that can reason over images or audio, support [tool calling](/v0.2/docs/concepts/#tool-calling) features as well.
To call tools using such models, simply bind tools to them in the [usual way](/v0.2/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data).
Below, we demonstrate examples using [OpenAI](/v0.2/docs/integrations/platforms/openai) and [Anthropic](/v0.2/docs/integrations/platforms/anthropic). We will use the same image and tool in all cases. Let’s first select an image, and build a placeholder tool that expects as input the string “sunny”, “cloudy”, or “rainy”. We will ask the models to describe the weather in the image.
```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const imageUrl =
  "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";

// A placeholder tool that simply echoes the weather the model reports.
const weatherTool = new DynamicStructuredTool({
  name: "weather",
  description: "Describe the weather",
  schema: z.object({
    weather: z.enum(["sunny", "cloudy", "rainy"]),
  }),
  func: async ({ weather }) => {
    console.log(weather);
    return weather;
  },
});
```
OpenAI[](#openai "Direct link to OpenAI")
------------------------------------------
For OpenAI, we can feed the image URL directly in a content block of type “image\_url”:
```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: {
        url: imageUrl,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```
```
[
  {
    name: "weather",
    args: { weather: "sunny" },
    id: "call_MbIAYS9ESBG1EWNM2sMlinjR"
  }
]
```
Note that we recover tool calls with parsed arguments in LangChain’s [standard format](/v0.2/docs/how_to/tool_calling) in the model response.
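Because LangChain tools are runnables, you can pass those parsed arguments straight back into the tool to execute it. A minimal sketch (error handling omitted):

```typescript
// Execute each tool call the model requested using its parsed arguments.
for (const toolCall of response.tool_calls ?? []) {
  const result = await weatherTool.invoke(toolCall.args);
  console.log(`${toolCall.name} -> ${result}`);
}
```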
Anthropic[](#anthropic "Direct link to Anthropic")
---------------------------------------------------
For Anthropic, we can pass a base64-encoded image as a data URL in a content block of type “image_url”; LangChain converts it to Anthropic's native “image” format under the hood:
```typescript
import * as fs from "node:fs/promises";
import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("../../data/sunny_day.jpeg");

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```
```
[
  {
    name: "weather",
    args: { weather: "sunny" },
    id: "toolu_01KnRZWQkgWYSzL2x28crXFm"
  }
]
```
https://js.langchain.com/v0.2/docs/how_to/vectorstore_retriever
How to use a vector store to retrieve data
==========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Text splitters](/v0.2/docs/concepts#text-splitters)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
Vector stores can be converted into retrievers using the [`.asRetriever()`](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStore.html#asRetriever) method, which allows you to more easily compose them in chains.
Below, we show a retrieval-augmented generation (RAG) chain that performs question answering over documents using the following steps:
1. Initialize a vector store
2. Create a retriever from that vector store
3. Compose a question answering chain
4. Ask questions!
Each of the steps has multiple substeps and potential configurations, but we'll go through one common flow. First, install the required dependency:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
You can download the `state_of_the_union.txt` file [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/state_of_the_union.txt).
```typescript
import * as fs from "node:fs";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import type { Document } from "@langchain/core/documents";

const formatDocumentsAsString = (documents: Document[]) => {
  return documents.map((document) => document.pageContent).join("\n\n");
};

// Initialize the LLM to use to answer the question.
const model = new ChatOpenAI({
  model: "gpt-4o",
});

const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Initialize a retriever wrapper around the vector store
const vectorStoreRetriever = vectorStore.asRetriever();

// Create a system & human prompt for the chat model
const SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  ["human", "{question}"],
]);

const chain = RunnableSequence.from([
  {
    context: vectorStoreRetriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const answer = await chain.invoke(
  "What did the president say about Justice Breyer?"
);

console.log({ answer });

/*
  {
    answer: 'The president honored Justice Stephen Breyer by recognizing his dedication to serving the country as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. He thanked Justice Breyer for his service.'
  }
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Let's walk through what's happening here.
1. We first load a long text and split it into smaller documents using a text splitter. We then load those documents (which also embeds them using the passed `OpenAIEmbeddings` instance) into `MemoryVectorStore`, our vector store, creating our index.
2. Though we can query the vector store directly, we convert the vector store into a retriever to return retrieved documents in the right format for the question answering chain (see the configuration sketch after this list).
3. We initialize a retrieval chain, which we'll call later in step 4.
4. We ask questions!
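Expanding on step 2, the retriever can also be tuned when it is created. A minimal sketch (the `k` value is an illustrative assumption, not a recommendation):

```typescript
// Return the top 4 most similar chunks instead of the default number.
const tunedRetriever = vectorStore.asRetriever({ k: 4 });

const retrievedDocs = await tunedRetriever.invoke(
  "What did the president say about Justice Breyer?"
);
```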
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to convert a vector store into a retriever.
See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
https://js.langchain.com/v0.2/docs/concepts
Conceptual guide
================
This section contains introductions to key parts of LangChain.
Architecture[](#architecture "Direct link to Architecture")
------------------------------------------------------------
LangChain as a framework consists of several pieces. The below diagram shows how they relate.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
### `@langchain/core`[](#langchaincore "Direct link to langchaincore")
This package contains base abstractions of different components and ways to compose them together. The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here. No third party integrations are defined here. The dependencies are kept purposefully very lightweight.
### `@langchain/community`[](#langchaincommunity "Direct link to langchaincommunity")
This package contains third party integrations that are maintained by the LangChain community. Key partner packages are separated out (see below). This contains all integrations for various components (LLMs, vectorstores, retrievers). All dependencies in this package are optional to keep the package as lightweight as possible.
### Partner packages[](#partner-packages "Direct link to Partner packages")
While the long tail of integrations is in `@langchain/community`, we split popular integrations into their own packages (e.g. `@langchain/openai`, `@langchain/anthropic`, etc.). This was done in order to improve support for these important integrations.
### `langchain`[](#langchain "Direct link to langchain")
The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. These are NOT third party integrations. All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.
### [LangGraph](/v0.2/docs/langgraph)[](#langgraph "Direct link to langgraph")
Not currently in this repo, `langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for constructing more controlled flows.
### [LangSmith](/v0.2/docs/langsmith)[](#langsmith "Direct link to langsmith")
A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
If you want to work with high level abstractions, you should install the `langchain` package.
* npm
* Yarn
* pnpm
npm i langchain
yarn add langchain
pnpm add langchain
If you want to work with specific integrations, you will need to install them separately. See [here](/v0.2/docs/integrations/platforms/) for a list of integrations and how to install them.
For working with LangSmith, you will need to set up a LangSmith developer account [here](https://smith.langchain.com) and get an API key. After that, you can enable it by setting environment variables:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__...
```
LangChain Expression Language[](#langchain-expression-language "Direct link to LangChain Expression Language")
---------------------------------------------------------------------------------------------------------------
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**First-class streaming support** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Optimized parallel execution** Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it for the smallest possible latency.
**Retries and fallbacks** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
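For illustration, here is a minimal sketch of both features (the model names and retry count are illustrative assumptions):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const primaryModel = new ChatOpenAI({ model: "gpt-4o" });
const backupModel = new ChatOpenAI({ model: "gpt-3.5-turbo" });

// Retry the primary model up to 2 times before failing.
const modelWithRetry = primaryModel.withRetry({ stopAfterAttempt: 2 });

// Fall back to the backup model if the primary model errors.
const modelWithFallback = primaryModel.withFallbacks({
  fallbacks: [backupModel],
});
```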
**Access intermediate results** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every [LangServe](https://www.langchain.com/langserve/) server.
**Input and output schemas** Input and output schemas give every LCEL chain schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
[**Seamless LangSmith tracing**](/v0.2/docs/langsmith) As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, **all** steps are automatically logged to [LangSmith](/v0.2/docs/langsmith/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](https://www.langchain.com/langserve/) Any chain created with LCEL can be easily deployed using [LangServe](https://www.langchain.com/langserve/).
### Interface[](#interface "Direct link to Interface")
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:
* [`stream`](#stream): stream back chunks of the response
* [`invoke`](#invoke): call the chain on an input
* [`batch`](#batch): call the chain on an array of inputs
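As a minimal illustration of this shared interface, here is a trivial runnable standing in for a real component:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const shout = RunnableLambda.from((text: string) => text.toUpperCase());

await shout.invoke("hello"); // "HELLO"
await shout.batch(["hello", "world"]); // ["HELLO", "WORLD"]

// Streaming yields chunks; this trivial runnable emits a single chunk.
for await (const chunk of await shout.stream("hello")) {
  console.log(chunk);
}
```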
The **input type** and **output type** vary by component:
| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Object | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or object, depending on the tool | Depends on the tool |
Components[](#components "Direct link to Components")
------------------------------------------------------
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. LangChain implements some components itself, relies on third-party integrations for others, and mixes the two approaches for the rest.
### LLMs[](#llms "Direct link to LLMs")
Language models that take a string as input and return a string. These are traditionally older models (newer models generally are `ChatModels`, see below).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This makes them interchangeable with ChatModels. When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not provide any LLMs, rather we rely on third party integrations.
### Chat models[](#chat-models "Direct link to Chat models")
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are traditionally newer models (older models are generally `LLMs`, see above). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This makes them interchangeable with LLMs (and simpler to use). When a string is passed in as input, it will be converted to a HumanMessage under the hood before being passed to the underlying model.
LangChain does not provide any ChatModels, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
* `model`: the name of the model
ChatModels also accept other parameters that are specific to that integration.
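For example, a chat model from the `@langchain/openai` integration can be constructed with the standardized `model` parameter and invoked with either form of input (a sketch; the model name is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chatModel = new ChatOpenAI({ model: "gpt-4o" });

// A plain string is converted to a HumanMessage under the hood...
await chatModel.invoke("Tell me a joke");

// ...which is equivalent to passing the message explicitly.
await chatModel.invoke([new HumanMessage({ content: "Tell me a joke" })]);
```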
### Function/Tool Calling[](#functiontool-calling "Direct link to Function/Tool Calling")
info
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.
A tool call includes a name, arguments object, and an optional identifier. The arguments object is structured `{ argumentName: argumentValue }`.
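For illustration, a single parsed tool call might look like this (all values are hypothetical):

```typescript
const exampleToolCall = {
  name: "weather", // the tool the model wants to call
  args: { location: "Madison, WI" }, // { argumentName: argumentValue }
  id: "call_abc123", // optional identifier
};
```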
Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools).
There are two main use cases for function/tool calling:
* [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/)
### Message types[](#message-types "Direct link to Message types")
Some language models take an array of messages as input and return a message. There are a few different types of messages. All messages have a `role`, `content`, and `response_metadata` property.
The `role` describes WHO is saying the message. LangChain has different message classes for different roles.
The `content` property describes the content of the message. This can be a few different things:
* A string (most models deal with this type of content)
* An array of objects (used for multi-modal input, where each object contains information about the input type and the location of the input)
#### HumanMessage[](#humanmessage "Direct link to HumanMessage")
This represents a message from the user.
#### AIMessage[](#aimessage "Direct link to AIMessage")
This represents a message from the model. In addition to the `content` property, these messages also have:
**`response_metadata`**
The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored.
**`tool_calls`**
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output. They can be accessed from there with the `.tool_calls` property.
This property returns an array of objects. Each object has the following keys:
* `name`: The name of the tool that should be called.
* `args`: The arguments to that tool.
* `id`: The id of that tool call.
#### SystemMessage[](#systemmessage "Direct link to SystemMessage")
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage[](#functionmessage "Direct link to FunctionMessage")
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage[](#toolmessage "Direct link to ToolMessage")
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
### Prompt templates[](#prompt-templates "Direct link to Prompt templates")
Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in.
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages.
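A small sketch of that flexibility, using methods from the `@langchain/core` prompt value interface:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const template = ChatPromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);
const promptValue = await template.invoke({ topic: "cats" });

// The same PromptValue can be viewed as a string or as messages.
const asString = promptValue.toString();
const asMessages = promptValue.toChatMessages();
```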
There are a few different types of prompt templates:
#### String PromptTemplates[](#string-prompttemplates "Direct link to String PromptTemplates")
These prompt templates are used to format a single string, and generally are used for simpler inputs. For example, a common way to construct and use a PromptTemplate is as follows:
```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

await promptTemplate.invoke({ topic: "cats" });
```
#### ChatPromptTemplates[](#chatprompttemplates "Direct link to ChatPromptTemplates")
These prompt templates are used to format an array of messages. These "templates" consist of an array of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

await promptTemplate.invoke({ topic: "cats" });
```
In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message that has no variables to format. The second is a HumanMessage that will be formatted by the `topic` variable the user passes in.
#### MessagesPlaceholder[](#messagesplaceholder "Direct link to MessagesPlaceholder")
This prompt template is responsible for adding an array of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in an array of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("msgs"),
]);

await promptTemplate.invoke({ msgs: [new HumanMessage({ content: "hi!" })] });
```
This will produce an array of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting an array of messages be slotted into a particular spot.
An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:
```typescript
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{msgs}"], // <-- This is the changed part
]);
```
### Example Selectors[](#example-selectors "Direct link to Example Selectors")
One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts.
### Output parsers[](#output-parsers "Direct link to Output parsers")
note
The information here refers to parsers that take the text output from a model and try to parse it into a more structured representation. More and more models support function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. See documentation for that [here](/v0.2/docs/concepts/#function-tool-calling).
Output parsers are responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. They are useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.
LangChain has many different types of output parsers. The table below lists the ones LangChain supports, along with the following information:
**Name**: The name of the output parser
**Supports Streaming**: Whether the output parser supports streaming.
**Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific arguments.
**Output Type**: The output type of the object returned by the parser.
**Description**: Our commentary on this output parser and when to use it.
| Name | Supports Streaming | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- |
| [JSON](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<T>` | Returns a JSON object as specified. You can specify a Zod schema and it will return JSON conforming to that schema. |
| [XML](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<XMLResult>` | Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.CommaSeparatedListOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<string[]>` | Returns an array of comma-separated values. |
| [Structured](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) |  | `string` \| `BaseMessage` | `Promise<TypeOf<T>>` | Parses structured JSON from an LLM response. |
| [HTTP](https://v02.api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) | ✅ | `string` | `Promise<Uint8Array>` | Parses an LLM response to then send over HTTP(s). Useful when invoking the LLM on the server/edge, and then sending the content/stream back to the client. |
| [Bytes](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<Uint8Array>` | Parses an LLM response to then send over HTTP(s). Useful for streaming LLM responses from the server/edge to the client. |
| [Datetime](https://v02.api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) |  | `string` | `Promise<Date>` | Parses a response into a `Date`. |
| [Regex](https://v02.api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) |  | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |
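As a minimal sketch of how these parsers are typically used (the model name is illustrative, and an OpenAI API key is assumed to be set in the environment), you can pipe a model into a parser so the raw response is transformed before it reaches your code:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });
const parser = new JsonOutputParser();

// The parser takes the model's message output and returns a plain object.
const chain = model.pipe(parser);
const result = await chain.invoke(
  "Return a JSON object with a single `joke` key containing a joke about cats."
);
```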
### Chat History[](#chat-history "Direct link to Chat History")
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input.
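One minimal sketch of this pattern uses `RunnableWithMessageHistory` with an in-memory message store (the per-session bookkeeping below is illustrative, not the only way to do it):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { InMemoryChatMessageHistory } from "@langchain/core/chat_history";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);
const chain = prompt.pipe(new ChatOpenAI({}));

// Keep one message history per session id (in memory, for illustration).
const histories: Record<string, InMemoryChatMessageHistory> = {};

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) =>
    (histories[sessionId] ??= new InMemoryChatMessageHistory()),
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

// Each call appends its input and output to session "1"'s history.
await chainWithHistory.invoke(
  { input: "Hi! I'm Bob." },
  { configurable: { sessionId: "1" } }
);
```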
### Document[](#document "Direct link to Document")
A Document object in LangChain contains information about some data. It has two attributes:
* `pageContent: string`: The content of this document. Currently this is only a string.
* `metadata: Record<string, any>`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
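For example, constructing a Document directly looks like this (the metadata keys are arbitrary):

```typescript
import { Document } from "@langchain/core/documents";

const doc = new Document({
  pageContent: "LangChain is a framework for building LLM applications.",
  metadata: { source: "example.txt", id: 1 },
});
```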
### Document loaders[](#document-loaders "Direct link to Document loaders")
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. An example use case is as follows:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(); // <-- Integration-specific parameters here
const docs = await loader.load();
```
### Text splitters[](#text-splitters "Direct link to Text splitters")
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
When you want to deal with long pieces of text, it is necessary to split that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This section showcases several ways to do that.
At a high level, text splitters work as follows:
1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
1. How the text is split
2. How the chunk size is measured
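A minimal sketch exercising both axes with the common recursive character splitter (the chunk sizes here are illustrative):

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 100, // axis 2: how chunk size is measured (characters here)
  chunkOverlap: 20, // overlap to keep context between neighboring chunks
});

// Axis 1: how the text is split (recursively by paragraph, sentence, word).
const chunks = await splitter.splitText("Some long document text...");
```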
### Embedding models[](#embedding-models "Direct link to Embedding models")
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
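A minimal sketch of the two methods, using OpenAI embeddings as an example provider:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// Embed multiple documents (the texts to be searched over).
const vectors = await embeddings.embedDocuments(["hello world", "goodbye"]);

// Embed a single query (the search string itself).
const queryVector = await embeddings.embedQuery("hello");
```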
### Vectorstores[](#vectorstores "Direct link to Vectorstores")
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
Vectorstores can be converted to the retriever interface by doing:
```typescript
const vectorstore = new MyVectorStore();
const retriever = vectorstore.asRetriever();
```
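As a more concrete sketch, here is the in-memory vector store bundled with `langchain` (the texts and metadata are illustrative, and an OpenAI key is assumed for the embeddings):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed and store two small texts, then search them by similarity.
const vectorstore = await MemoryVectorStore.fromTexts(
  ["mitochondria are the powerhouse of the cell", "cats like to nap"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

const results = await vectorstore.similaritySearch("cell biology", 1);
```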
### Retrievers[](#retrievers "Direct link to Retrievers")
A retriever is an interface that returns relevant documents given an unstructured query. Retrievers are more general than vector stores: a retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Exa search](/v0.2/docs/integrations/retrievers/exa/) (web search) and [Amazon Kendra](/v0.2/docs/integrations/retrievers/kendra-retriever/).
Retrievers accept a string query as input and return an array of `Document`s as output.
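Because retrievers are runnables, a minimal sketch of that contract (reusing the `retriever` created above) is:

```typescript
// Input: a string query. Output: an array of Document objects.
const relevantDocs = await retriever.invoke("What is LangChain?");
```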
### Advanced Retrieval Types[](#advanced-retrieval-types "Direct link to Advanced Retrieval Types")
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
**Name**: Name of the retrieval algorithm.
**Index Type**: Which index type (if any) this relies on.
**Uses an LLM**: Whether this retrieval method uses an LLM.
**When to Use**: Our commentary on when you should consider using this retrieval method.
**Description**: Description of what this retrieval algorithm is doing.
| Name | Index Type | Uses an LLM | When to Use | Description |
| --- | --- | --- | --- | --- |
| [Vectorstore](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStoreRetriever.html) | Vectorstore | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) | Vectorstore + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) | Vectorstore + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways; examples include summaries of the text and hypothetical questions. |
| [Self Query](https://v02.api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html) | Vectorstore | Yes | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
| [Contextual Compression](https://v02.api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Time-Weighted Vectorstore](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html) | Vectorstore | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones. | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents). |
| [Multi-Query Retriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_query.MultiQueryRetriever.html) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond. | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. |
### Tools[](#tools "Direct link to Tools")
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
It is useful to have all this information because it can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input.
Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.
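As a minimal sketch of a tool that combines all of these pieces (the `multiply` tool here is hypothetical):

```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply", // used in the prompt
  description: "Multiply two numbers together", // used in the prompt
  schema: z.object({
    // The input schema, expressed with Zod
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  func: async ({ a, b }) => (a * b).toString(), // the function to call
});

await multiplyTool.invoke({ a: 3, b: 4 }); // "12"
```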
### Toolkits[](#toolkits "Direct link to Toolkits")
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
All Toolkits expose a `getTools` method which returns an array of tools. You can therefore do:
```typescript
// Initialize a toolkit
const toolkit = new ExampleToolkit(...);

// Get list of tools
const tools = toolkit.getTools();
```
### Agents[](#agents "Direct link to Agents")
By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it determines whether more actions are needed or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraphjs) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. Please check out that [documentation](https://langchain-ai.github.io/langgraphjs/) for a more in depth overview of agent concepts.
There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`. AgentExecutor was essentially a runtime for agents. It was a great place to get started; however, it was not flexible enough once you started building more customized agents. To solve that, we built LangGraph to be a flexible, highly controllable runtime.
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/v0.2/docs/how_to/agent_executor). It is recommended, however, that you start to transition to [LangGraph](https://github.com/langchain-ai/langgraphjs).
* * *
https://js.langchain.com/v0.2/docs/how_to/tools_builtin
How to use LangChain tools
==========================
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
It is useful to have all this information because it can be used to build action-taking systems! The name, description, and schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input. For a list of agent types and which ones work with more complicated inputs, please see [this documentation](https://js.langchain.com/v0.1/docs/modules/agents/agent_types/).
Importantly, the name, description, and schema (if used) are all used in the prompt. Therefore, it is vitally important that they are clear and describe exactly how the tool should be used.
Default Tools[](#default-tools "Direct link to Default Tools")
---------------------------------------------------------------
Let’s take a look at how to work with tools. To do this, we’ll work with a built-in tool.
```typescript
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";

const tool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 100,
});
```
This is the default name:
tool.name;
"wikipedia-api"
This is the default description:
tool.description;
"A tool for interacting with and fetching data from the Wikipedia API."
This is the default schema of the inputs. This is a [Zod](https://zod.dev) schema on the tool class. We convert it to JSON schema for display purposes:
```typescript
import { zodToJsonSchema } from "zod-to-json-schema";

zodToJsonSchema(tool.schema);
```
```typescript
{
  type: "object",
  properties: { input: { type: "string" } },
  additionalProperties: false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}
```
We can see if the tool should return directly to the user:
tool.returnDirect;
false
We can invoke this tool with an object input:
await tool.invoke({ input: "langchain" });
"Page: LangChain\n" + "Summary: LangChain is a framework designed to simplify the creation of applications "
We can also invoke this tool with a single string input. We can do this because this tool expects only a single input. If it required multiple inputs, we would not be able to do that.
await tool.invoke("langchain");
"Page: LangChain\n" + "Summary: LangChain is a framework designed to simplify the creation of applications "
How to use built-in toolkits[](#how-to-use-built-in-toolkits "Direct link to How to use built-in toolkits")
------------------------------------------------------------------------------------------------------------
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
For a complete list of available ready-made toolkits, visit [Integrations](/v0.2/docs/integrations/toolkits/).
All Toolkits expose a `getTools()` method which returns a list of tools.
You’re usually meant to use them this way:
```typescript
// Initialize a toolkit
const toolkit = new ExampleToolkit(...);

// Get list of tools
const tools = toolkit.getTools();
```
More Topics[](#more-topics "Direct link to More Topics")
---------------------------------------------------------
This was a quick introduction to tools in LangChain, but there is a lot more to learn:
**[Built-In Tools](/v0.2/docs/integrations/tools/)**: For a list of all built-in tools, see [this page](/v0.2/docs/integrations/tools/).
**[Custom Tools](/v0.2/docs/how_to/custom_tools)**: Although built-in tools are useful, it’s highly likely that you’ll have to define your own tools. See [this guide](/v0.2/docs/how_to/custom_tools) for instructions on how to do so.
* * *
https://js.langchain.com/v0.2/docs/langgraph
🦜🕸️LangGraph.js
=================
⚡ Building language agents as graphs ⚡
Overview[](#overview "Direct link to Overview")
------------------------------------------------
LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) [LangChain.js](https://github.com/langchain-ai/langchainjs). It extends the [LangChain Expression Language](/v0.2/docs/how_to/#langchain-expression-language-lcel) with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The current interface exposed is one inspired by [NetworkX](https://networkx.org/documentation/latest/).
The main use is for adding **cycles** to your LLM application. Crucially, LangGraph is NOT optimized for **DAG**-only workflows. If you want to build a DAG, you should just use [LangChain Expression Language](/v0.2/docs/how_to/#langchain-expression-language-lcel).
Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
> Looking for the Python version? Click [here](https://github.com/langchain-ai/langgraph).
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
npm install @langchain/langgraph
Quick start[](#quick-start "Direct link to Quick start")
---------------------------------------------------------
One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in the graph as they execute, and each node updates this internal state with its return value after it executes. The way that the graph updates its internal state is defined by either the type of graph chosen or a custom function.
State in LangGraph can be pretty general, but to keep things simpler to start, we'll show off an example where the graph's state is limited to a list of chat messages using the built-in `MessageGraph` class. This is convenient when using LangGraph with LangChain chat models because we can return chat model output directly.
First, install the LangChain OpenAI integration package:
npm i @langchain/openai
We also need to export some environment variables:
export OPENAI_API_KEY=sk-...
And now we're ready! The graph below contains a single node called `"oracle"` that executes a chat model, then returns the result:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, BaseMessage } from "@langchain/core/messages";
import { END, MessageGraph } from "@langchain/langgraph";

const model = new ChatOpenAI({ temperature: 0 });

const graph = new MessageGraph();

graph.addNode("oracle", async (state: BaseMessage[]) => {
  return model.invoke(state);
});

graph.addEdge("oracle", END);

graph.setEntryPoint("oracle");

const runnable = graph.compile();
```
Let's run it!
```typescript
// For MessageGraph, input should always be a message or list of messages.
const res = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
```
```typescript
[
  HumanMessage { content: 'What is 1 + 1?', additional_kwargs: {} },
  AIMessage {
    content: '1 + 1 equals 2.',
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }
]
```
So what did we do here? Let's break it down step by step:
1. First, we initialize our model and a `MessageGraph`.
2. Next, we add a single node to the graph, called `"oracle"`, which simply calls the model with the given input.
3. We add an edge from this `"oracle"` node to the special value `END`. This means that execution will end after the current node.
4. We set `"oracle"` as the entrypoint to the graph.
5. We compile the graph, ensuring that no more modifications to it can be made.
Then, when we execute the graph:
1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, `"oracle"`.
2. The `"oracle"` node executes, invoking the chat model.
3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
4. Execution progresses to the special `END` value and outputs the final state.
And as a result, we get a list of two chat messages as output.
### Interaction with LCEL[](#interaction-with-lcel "Direct link to Interaction with LCEL")
As an aside for those already familiar with LangChain - `addNode` actually takes any runnable as input. In the above example, the passed function is automatically converted, but we could also have passed the model directly:
graph.addNode("oracle", model);
In which case the `.invoke()` method will be called when the graph executes.
Just make sure you are mindful of the fact that the input to the runnable is the entire current state. So this will fail:
```typescript
// This will NOT work with MessageGraph!
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant who always speaks in pirate dialect"],
  new MessagesPlaceholder("messages"),
]);

const chain = prompt.pipe(model);

// State is a list of messages, but our chain expects an object input:
//
// { messages: [] }
//
// Therefore, the graph will throw an exception when it executes here.
graph.addNode("oracle", chain);
```
Conditional edges[](#conditional-edges "Direct link to Conditional edges")
---------------------------------------------------------------------------
Now, let's move onto something a little bit less trivial. Because math can be difficult for LLMs, let's allow the LLM to conditionally call a calculator node using tool calling.
npm i langchain @langchain/openai
We'll recreate our graph with an additional `"calculator"` node that will take the most recent message, if it is a math expression, and calculate the result. We'll also bind the calculator to the OpenAI model as a tool to allow the model to optionally use it if it deems necessary:
```typescript
import { ToolMessage } from "@langchain/core/messages";
import { Calculator } from "langchain/tools/calculator";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";

const model = new ChatOpenAI({
  temperature: 0,
}).bind({
  tools: [convertToOpenAITool(new Calculator())],
  tool_choice: "auto",
});

const graph = new MessageGraph();

graph.addNode("oracle", async (state: BaseMessage[]) => {
  return model.invoke(state);
});

graph.addNode("calculator", async (state: BaseMessage[]) => {
  const tool = new Calculator();
  const toolCalls =
    state[state.length - 1].additional_kwargs.tool_calls ?? [];
  const calculatorCall = toolCalls.find(
    (toolCall) => toolCall.function.name === "calculator"
  );
  if (calculatorCall === undefined) {
    throw new Error("No calculator input found.");
  }
  const result = await tool.invoke(
    JSON.parse(calculatorCall.function.arguments)
  );
  return new ToolMessage({
    tool_call_id: calculatorCall.id,
    content: result,
  });
});

graph.addEdge("calculator", END);

graph.setEntryPoint("oracle");
```
Now let's think - what do we want to have happen?
* If the `"oracle"` node returns a message expecting a tool call, we want to execute the `"calculator"` node
* If not, we can just end execution
We can achieve this using **conditional edges**, which routes execution to a node based on the current state using a function.
Here's what that looks like:
```typescript
const router = (state: BaseMessage[]) => {
  const toolCalls =
    state[state.length - 1].additional_kwargs.tool_calls ?? [];
  if (toolCalls.length) {
    return "calculator";
  } else {
    return "end";
  }
};

graph.addConditionalEdges("oracle", router, {
  calculator: "calculator",
  end: END,
});
```
If the model output contains a tool call, we move to the `"calculator"` node. Otherwise, we end.
Great! Now all that's left is to compile the graph and try it out. Math-related questions are routed to the calculator tool:
```typescript
const runnable = graph.compile();
const mathResponse = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
```
```typescript
[
  HumanMessage { content: 'What is 1 + 1?', additional_kwargs: {} },
  AIMessage {
    content: '',
    additional_kwargs: { function_call: undefined, tool_calls: [Array] }
  },
  ToolMessage {
    content: '2',
    name: undefined,
    additional_kwargs: {},
    tool_call_id: 'call_P7KWQoftVsj6fgsqKyolWp91'
  }
]
```
While conversational responses are returned directly:
```typescript
const otherResponse = await runnable.invoke(
  new HumanMessage("What is your name?")
);
```
```typescript
[
  HumanMessage { content: 'What is your name?', additional_kwargs: {} },
  AIMessage {
    content: 'My name is Assistant. How can I assist you today?',
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }
]
```
Cycles[](#cycles "Direct link to Cycles")
------------------------------------------
Now, let's go over a more general example with a cycle. We will recreate the [`AgentExecutor`](/v0.2/docs/how_to/agent_executor/) class from LangChain.
The benefit of creating it with LangGraph is that the result is more modifiable.
We will need to install some LangChain packages:
npm install langchain @langchain/core @langchain/community @langchain/openai
We also need additional environment variables.
```bash
export OPENAI_API_KEY=sk-...
export TAVILY_API_KEY=tvly-...
```
Optionally, we can set up [LangSmith](https://docs.smith.langchain.com/) for best-in-class observability.
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=ls__...
export LANGCHAIN_ENDPOINT=https://api.langchain.com
```
### Set up the tools[](#set-up-the-tools "Direct link to Set up the tools")
As above, we will first define the tools we want to use. For this simple example, we will use a built-in search tool via Tavily. However, it is really easy to create your own tools - see documentation [here](/v0.2/docs/how_to/custom_tools) on how to do that.
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [new TavilySearchResults({ maxResults: 1 })];
```
We can now wrap these tools in a ToolExecutor, which simply takes in a ToolInvocation and calls that tool, returning the output.
A ToolInvocation is any type with `tool` and `toolInput` attributes.
```typescript
import { ToolExecutor } from "@langchain/langgraph/prebuilt";

const toolExecutor = new ToolExecutor({ tools });
```
### Set up the model[](#set-up-the-model "Direct link to Set up the model")
Now we need to load the chat model we want to use. This time, we'll use the older function calling interface. This walkthrough will use OpenAI, but we can choose any model that supports OpenAI function calling.
```typescript
import { ChatOpenAI } from "@langchain/openai";

// We will set streaming: true so that we can stream tokens.
// See the streaming section for more information on this.
const model = new ChatOpenAI({
  temperature: 0,
  streaming: true,
});
```
After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI function calling, and then binding them to the model class.
```typescript
import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";

const toolsAsOpenAIFunctions = tools.map((tool) =>
  convertToOpenAIFunction(tool)
);
const newModel = model.bind({
  functions: toolsAsOpenAIFunctions,
});
```
### Define the agent state[](#define-the-agent-state "Direct link to Define the agent state")
This time, we'll use the more general `StateGraph`. This graph is parameterized by a state object that it passes around to each node. Remember that each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted by annotating the state object you construct the graph with.
For this example, the state we will track will just be a list of messages. We want each node to just add messages to that list. Therefore, we will use an object with one key (`messages`) with the value as an object: `{ value: Function, default?: () => any }`
The `default` key must be a factory that returns the default value for that attribute.
```typescript
import { BaseMessage } from "@langchain/core/messages";

const agentState = {
  messages: {
    value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
    default: () => [],
  },
};
```
You can think of the `MessageGraph` used in the initial example as a preconfigured version of this graph. The difference is that the state is directly a list of messages, instead of an object containing a key called `"messages"` whose value is a list of messages. The `MessageGraph` update step is similar to the one above where we always append the returned values of a node to the internal state.
### Define the nodes[](#define-the-nodes "Direct link to Define the nodes")
We now need to define a few different nodes in our graph. In LangGraph, a node can be either a function or a [runnable](/v0.2/docs/how_to/#langchain-expression-language-lcel). There are two main nodes we need for this:
1. The agent: responsible for deciding what (if any) actions to take.
2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action.
We will also need to define some edges. Some of these edges may be conditional. The reason they are conditional is that based on the output of a node, one of several paths may be taken. The path that is taken is not known until that node is run (the LLM decides).
1. Conditional Edge: after the agent is called, we should either:
   a. If the agent said to take an action, then the function to invoke tools should be called
   b. If the agent said that it was finished, then it should finish
2. Normal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next
Let's define the nodes, as well as a function to decide which conditional edge to take.
```typescript
import { FunctionMessage } from "@langchain/core/messages";
import { AgentAction } from "@langchain/core/agents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Define the function that determines whether to continue or not
const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1];
  // If there is no function call, then we finish
  if (
    !("function_call" in lastMessage.additional_kwargs) ||
    !lastMessage.additional_kwargs.function_call
  ) {
    return "end";
  }
  // Otherwise if there is, we continue
  return "continue";
};

// Define the function to execute tools
const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
  const { messages } = state;
  // Based on the continue condition
  // we know the last message involves a function call
  const lastMessage = messages[messages.length - 1];
  if (!lastMessage) {
    throw new Error("No messages found.");
  }
  if (!lastMessage.additional_kwargs.function_call) {
    throw new Error("No function call found in message.");
  }
  // We construct an AgentAction from the function_call
  return {
    tool: lastMessage.additional_kwargs.function_call.name,
    toolInput: JSON.parse(
      lastMessage.additional_kwargs.function_call.arguments
    ),
    log: "",
  };
};

// Define the function that calls the model
const callModel = async (state: { messages: Array<BaseMessage> }) => {
  const { messages } = state;
  // You can use a prompt here to tweak model behavior.
  // You can also just pass messages to the model directly.
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a helpful assistant."],
    new MessagesPlaceholder("messages"),
  ]);
  const response = await prompt.pipe(newModel).invoke({ messages });
  // We return a list, because this will get added to the existing list
  return {
    messages: [response],
  };
};

const callTool = async (state: { messages: Array<BaseMessage> }) => {
  const action = _getAction(state);
  // We call the tool executor and get back a response
  const response = await toolExecutor.invoke(action);
  // We use the response to create a FunctionMessage
  const functionMessage = new FunctionMessage({
    content: response,
    name: action.tool,
  });
  // We return a list, because this will get added to the existing list
  return { messages: [functionMessage] };
};
```
### Define the graph
We can now put it all together and define the graph!
```typescript
import { StateGraph, END } from "@langchain/langgraph";

// Define a new graph
const workflow = new StateGraph({
  channels: agentState,
});

// Define the two nodes we will cycle between
workflow.addNode("agent", callModel);
workflow.addNode("action", callTool);

// Set the entrypoint as `agent`
// This means that this node is the first one called
workflow.setEntryPoint("agent");

// We now add a conditional edge
workflow.addConditionalEdges(
  // First, we define the start node. We use `agent`.
  // This means these are the edges taken after the `agent` node is called.
  "agent",
  // Next, we pass in the function that will determine which node is called next.
  shouldContinue,
  // Finally we pass in a mapping.
  // The keys are strings, and the values are other nodes.
  // END is a special node marking that the graph should finish.
  // What will happen is we will call `shouldContinue`, and then the output of that
  // will be matched against the keys in this mapping.
  // Based on which one it matches, that node will then be called.
  {
    // If `continue`, then we call the "action" node to invoke tools.
    continue: "action",
    // Otherwise we finish.
    end: END,
  }
);

// We now add a normal edge from "action" back to "agent".
// This means that after tools are invoked, the agent node is called next.
workflow.addEdge("action", "agent");

// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile();
```
### Use it!
We can now use it! This now exposes the [same interface](/v0.2/docs/how_to/#langchain-expression-language-lcel) as all other LangChain runnables. This runnable accepts a list of messages.
```typescript
import { HumanMessage } from "@langchain/core/messages";

const inputs = {
  messages: [new HumanMessage("what is the weather in sf")],
};
const result = await app.invoke(inputs);
```
See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).
This may take a little bit - it's making a few calls behind the scenes. In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.
Streaming
---------
LangGraph has support for several different types of streaming.
### Streaming Node Output
One of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.
```typescript
const inputs = {
  messages: [new HumanMessage("what is the weather in sf")],
};
for await (const output of await app.stream(inputs)) {
  console.log("output", output);
  console.log("-----\n");
}
```
See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
Running Examples
----------------
You can find some more example notebooks of different use-cases in the `examples/` folder in this repo. These example notebooks use the [Deno runtime](https://deno.land/).
To pull in environment variables, you can create a `.env` file at the **root** of this repo (not in the `examples/` folder itself).
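For example, a minimal `.env` for this walkthrough might look like the following (treat these as placeholder values; exactly which keys you need depends on the notebook you run):

```
OPENAI_API_KEY=your-api-key
TAVILY_API_KEY=your-api-key
```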
When to Use
-----------
When should you use this versus [LangChain Expression Language](/v0.2/docs/how_to/#langchain-expression-language-lcel)?
If you need cycles.
LangChain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding in cycles. `langgraph` adds that syntax.
Examples
--------
### ChatAgentExecutor: with function calling
This agent executor takes a list of messages as input and outputs a list of messages. All agent state is represented as a list of messages. This specifically uses OpenAI function calling, and is the recommended agent executor for newer chat-based models that support function calling.
* [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/tree/main/examples/chat_agent_executor_with_function_calling/base.ipynb): Walks through creating this type of executor from scratch
### AgentExecutor
This agent executor uses existing LangChain agents.
* [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/tree/main/examples/agent_executor/base.ipynb): Walks through creating this type of executor from scratch
### Multi-agent Examples
* [Multi-agent collaboration](https://github.com/langchain-ai/langgraphjs/tree/main/examples/multi_agent/multi_agent_collaboration.ipynb): how to create two agents that work together to accomplish a task
* [Multi-agent with supervisor](https://github.com/langchain-ai/langgraphjs/tree/main/examples/multi_agent/agent_supervisor.ipynb): how to orchestrate individual agents by using an LLM as a "supervisor" to distribute work
* [Hierarchical agent teams](https://github.com/langchain-ai/langgraphjs/tree/main/examples/multi_agent/hierarchical_agent_teams.ipynb): how to orchestrate "teams" of agents as nested graphs that can collaborate to solve a problem
Documentation
-------------
There are only a few new APIs to use.
### StateGraph
The main entrypoint is `StateGraph`.
```typescript
import { StateGraph } from "@langchain/langgraph";
```
This class is responsible for constructing the graph. It exposes an interface inspired by [NetworkX](https://networkx.org/documentation/latest/). This graph is parameterized by a state object that it passes around to each node.
#### `constructor`
```typescript
interface StateGraphArgs<T = any> {
  channels: Record<
    string,
    {
      value: BinaryOperator<T> | null;
      default?: () => T;
    }
  >;
}

class StateGraph<T> extends Graph {
  constructor(fields: StateGraphArgs<T>) {}
}
```
When constructing the graph, you need to pass in a schema for a state. Each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted by annotating the state object you construct the graph with.
Let's take a look at an example:
```typescript
import { BaseMessage } from "@langchain/core/messages";

const schema = {
  input: {
    value: null,
  },
  agentOutcome: {
    value: null,
  },
  steps: {
    value: (x: Array<BaseMessage>, y: Array<BaseMessage>) => x.concat(y),
    default: () => [],
  },
};
```
We can then use this like:
```typescript
// Initialize the StateGraph with this state
const graph = new StateGraph({ channels: schema });

// Create nodes and edges...

// Compile the graph
const app = graph.compile();

// The inputs should be an object, because the schema is an object
const inputs = {
  // Let's assume this is the input
  input: "hi",
  // Let's assume agentOutcome is set by the graph at some point.
  // It doesn't need to be provided, and it will be null by default.
};
```
### `.addNode`
```typescript
addNode(key: string, action: RunnableLike<RunInput, RunOutput>): void
```
This method adds a node to the graph. It takes two arguments:
* `key`: A string representing the name of the node. This must be unique.
* `action`: The action to take when this node is called. This should either be a function or a runnable.
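For example, as in the Quick Start above:

```typescript
workflow.addNode("agent", callModel);
```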
### `.addEdge`
```typescript
addEdge(startKey: string, endKey: string): void
```
Creates an edge from one node to the next. This means that the output of the first node will be passed to the next node. It takes two arguments:
* `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
* `endKey`: A string representing the name of the end node. This key must have already been registered in the graph.
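For example, the Quick Start routes tool output back to the agent like this:

```typescript
workflow.addEdge("action", "agent");
```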
### `.addConditionalEdges`
```typescript
addConditionalEdges(
  startKey: string,
  condition: CallableFunction,
  conditionalEdgeMapping: Record<string, string>
): void
```
This method adds conditional edges. What this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node. This takes three arguments:
* `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
* `condition`: A function to call to decide what to do next. The input will be the output of the start node. It should return a string that is present in `conditionalEdgeMapping` and represents the edge to take.
* `conditionalEdgeMapping`: A mapping of string to string. The keys should be strings that may be returned by `condition`. The values should be the downstream node to call if that condition is returned.
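For example, as in the Quick Start above:

```typescript
workflow.addConditionalEdges("agent", shouldContinue, {
  continue: "action",
  end: END,
});
```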
### `.setEntryPoint`
```typescript
setEntryPoint(key: string): void
```
The entrypoint to the graph. This is the node that is first called. It only takes one argument:
* `key`: The name of the node that should be called first.
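For example:

```typescript
workflow.setEntryPoint("agent");
```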
### `.setFinishPoint`
```typescript
setFinishPoint(key: string): void
```
This is the exit point of the graph. When this node is called, the results will be the final result from the graph. It only has one argument:
* `key`: The name of the node whose results, when it is called, will be returned as the final output
Note: This does not need to be called if at any point you previously created an edge (conditional or normal) to `END`.
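A minimal sketch (the Quick Start above doesn't use this, since it reaches `END` through a conditional edge instead):

```typescript
// Equivalent to routing this node's output straight to END.
workflow.setFinishPoint("agent");
```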
### `END`
```typescript
import { END } from "@langchain/langgraph";
```
This is a special node representing the end of the graph. This means that anything passed to this node will be the final output of the graph. It can be used in two places:
* As the `endKey` in `addEdge`
* As a value in `conditionalEdgeMapping` as passed to `addConditionalEdges`
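For example, a sketch of the direct-edge form (note the Quick Start instead reaches `END` through its conditional edge mapping):

```typescript
workflow.addEdge("action", END);
```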
Examples
--------
### AgentExecutor
See the above Quick Start for an example of re-creating the LangChain [`AgentExecutor`](/v0.2/docs/how_to/agent_executor/) class.
### Forced Function Calling
One simple modification of the above graph is to always call a certain tool first. This can be useful if you want to enforce that a certain tool is called, but still want to enable agentic behavior after the fact.
Assuming you have done the above Quick Start, you can build off it like this:
#### Define the first tool call
Here, we manually define the first tool call that we will make. Notice that it does the same thing as `agent` would have done (adds the `agentOutcome` key), which lets us plug it in easily.
```typescript
import { AgentStep, AgentAction, AgentFinish } from "@langchain/core/agents";

// Define the data type that the agent will return.
type AgentData = {
  input: string;
  steps: Array<AgentStep>;
  agentOutcome?: AgentAction | AgentFinish;
};

const firstAgent = (inputs: AgentData) => {
  const newInputs = inputs;
  const action = {
    // We force call this tool
    tool: "tavily_search_results_json",
    // We just pass in the `input` key to this tool
    toolInput: newInputs.input,
    log: "",
  };
  newInputs.agentOutcome = action;
  return newInputs;
};
```
#### Create the graph
We can now create a new graph with this new node:
```typescript
const workflow = new Graph();

// Add the same nodes as before, plus this "first agent"
workflow.addNode("firstAgent", firstAgent);
workflow.addNode("agent", agent);
workflow.addNode("tools", executeTools);

// We now set the entry point to be this first agent
workflow.setEntryPoint("firstAgent");

// We define the same edges as before
workflow.addConditionalEdges("agent", shouldContinue, {
  continue: "tools",
  exit: END,
});
workflow.addEdge("tools", "agent");

// We also define a new edge, from the "first agent" to the tools node.
// This is so that we can call the tool first.
workflow.addEdge("firstAgent", "tools");

// We now compile the graph as before
const chain = workflow.compile();
```
#### Use it!
We can now use it as before! Depending on whether or not the first tool call is actually useful, this may save you an LLM call or two.
```typescript
const result = await chain.invoke({
  input: "what is the weather in sf",
  steps: [],
});
```
You can see a LangSmith trace of this chain [here](https://smith.langchain.com/public/2e0a089f-8c05-405a-8404-b0a60b79a84a/r).
* * *

https://js.langchain.com/v0.2/docs/versions/overview
LangChain Over Time
===================
Because the field is evolving rapidly, LangChain has evolved rapidly as well. This document outlines, at a high level, what has changed and why.
0.1
---
The 0.1 release marked a few key changes for LangChain. By this point, the LangChain ecosystem had become large, both in the breadth of what it enabled and in the community behind it.
**Split of packages**
LangChain was split up into several packages to increase modularity and decrease bloat. First, `@langchain/core` was created as a lightweight core library containing the base abstractions, some core implementations of those abstractions, and the generic runtime for creating chains. Next, all third-party integrations were split into `@langchain/community` or their own individual partner packages. Higher-level chains and agents remain in `langchain`.
**`Runnables`**
Having a specific class for each chain was proving neither scalable nor flexible. Although these classes were left alone (without deprecation warnings) for this release, the documentation gave much more space to generic runnables.
< 0.1
-----
There are several key characteristics of LangChain pre-0.1.
**Singular Package**
LangChain was largely a singular package. This meant that ALL integrations lived inside `langchain`.
**Chains as classes**
Most high-level chains were largely their own classes. There was a base `Chain` class from which all chains inherited. This meant that in order to change the logic inside a chain, you basically had to modify the source code. There were a few chains that were meant to be more generic (`SequentialChain`, `RouterChain`).
* * *

https://js.langchain.com/v0.2/docs/how_to/tool_calling
How to use a chat model to call tools
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
info
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Tool calling allows a chat model to respond to a given prompt by “calling a tool”. While the name implies that the model is performing some action, this is actually not the case! The model generates the arguments to a tool, and actually running the tool (or not) is up to the user. For example, if you want to [extract output matching some schema](/v0.2/docs/how_to/structured_output/) from unstructured text, you could give the model an “extraction” tool that takes parameters matching the desired schema, then treat the generated output as your final result.
However, tool calling goes beyond [structured output](/v0.2/docs/how_to/structured_output/) since you can pass responses to called tools back to the model to create longer interactions. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine with arguments. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools).
Tool calling is not universal, but many popular LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature.
LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls. This guide will show you how to use them.
Passing tools to LLMs
---------------------
Chat models that support tool calling features implement a [`.bindTools()`](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html#bindTools) method, which receives a list of LangChain [tool objects](https://v02.api.js.langchain.com/classes/langchain_core_tools.StructuredTool.html) and binds them to the chat model in its expected format. Subsequent invocations of the chat model will include tool schemas in its calls to the LLM.
Let’s walk through a few examples:
### Pick your chat model:

#### Anthropic

Install dependencies (see [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages)):

* npm: `npm i @langchain/anthropic @langchain/core`
* yarn: `yarn add @langchain/anthropic @langchain/core`
* pnpm: `pnpm add @langchain/anthropic @langchain/core`

Add environment variables:

```
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### OpenAI

Install dependencies:

* npm: `npm i @langchain/openai @langchain/core`
* yarn: `yarn add @langchain/openai @langchain/core`
* pnpm: `pnpm add @langchain/openai @langchain/core`

Add environment variables:

```
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

* npm: `npm i @langchain/mistralai @langchain/core`
* yarn: `yarn add @langchain/mistralai @langchain/core`
* pnpm: `pnpm add @langchain/mistralai @langchain/core`

Add environment variables:

```
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:

* npm: `npm i @langchain/community @langchain/core`
* yarn: `yarn add @langchain/community @langchain/core`
* pnpm: `pnpm add @langchain/community @langchain/core`

Add environment variables:

```
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let’s take a look at how we might take the following Zod function schema and get different models to invoke it:
```typescript
import { z } from "zod";

/**
 * Note that the descriptions here are crucial, as they will be passed along
 * to the model along with the class name.
 */
const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});
```
We can use the `.bindTools()` method to handle the conversion from LangChain tool to our model provider’s specific format and bind it to the model (i.e., passing it in each time the model is invoked). Let’s create a `DynamicStructuredTool` implementing a tool based on the above schema, then bind it to the model:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";

const calculatorTool = new DynamicStructuredTool({
  name: "calculator",
  description: "Can perform mathematical operations.",
  schema: calculatorSchema,
  func: async ({ operation, number1, number2 }) => {
    // Functions must return strings
    if (operation === "add") {
      return `${number1 + number2}`;
    } else if (operation === "subtract") {
      return `${number1 - number2}`;
    } else if (operation === "multiply") {
      return `${number1 * number2}`;
    } else if (operation === "divide") {
      return `${number1 / number2}`;
    } else {
      throw new Error("Invalid operation.");
    }
  },
});

const llmWithTools = llm.bindTools([calculatorTool]);
```
Now, let’s invoke it! We expect the model to use the calculator to answer the question:
```typescript
const res = await llmWithTools.invoke("What is 3 * 12");

console.log(res.tool_calls);
```
```
[
  {
    name: "calculator",
    args: { operation: "multiply", number1: 3, number2: 12 },
    id: "call_Ri9s27J17B224FEHrFGkLdxH"
  }
]
```
tip
See a LangSmith trace for the above [here](https://smith.langchain.com/public/14e4b50c-c6cf-4c53-b3ef-da550edb6d66/r).
We can see that the response message contains a `tool_calls` field when the model decides to call the tool. This will be in LangChain’s standardized format.
The `.tool_calls` attribute should contain valid tool calls. Note that on occasion, model providers may output malformed tool calls (e.g., arguments that are not valid JSON). When parsing fails in these cases, the message will contain instances of [InvalidToolCall](https://v02.api.js.langchain.com/types/langchain_core_messages_tool.InvalidToolCall.html) objects in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have a name, string arguments, identifier, and error message.
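As a minimal sketch (assuming `res` is the `AIMessage` returned by the invocation above), you could surface malformed calls like this:

```typescript
// Inspect any tool calls the parser could not recover.
if (res.invalid_tool_calls && res.invalid_tool_calls.length > 0) {
  for (const invalidCall of res.invalid_tool_calls) {
    // Each InvalidToolCall may carry a name, raw string args,
    // an id, and an error message.
    console.log(invalidCall.name, invalidCall.args, invalidCall.error);
  }
}
```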
### Streaming
When tools are called in a streaming context, [message chunks](https://v02.api.js.langchain.com/classes/langchain_core_messages.BaseMessageChunk.html) will be populated with [tool call chunk](https://v02.api.js.langchain.com/types/langchain_core_messages_tool.ToolCallChunk.html) objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes optional string fields for the tool `name`, `args`, and `id`, and includes an optional integer field `index` that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id).
Because message chunks inherit from their parent message class, an [AIMessageChunk](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessageChunk.html) with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. These fields are parsed best-effort from the message’s tool call chunks.
Note that not all providers currently support streaming for tool calls. If this is the case for your specific provider, the model will yield a single chunk with the entire call when you call `.stream()`.
```typescript
const stream = await llmWithTools.stream("What is 308 / 29");

for await (const chunk of stream) {
  console.log(chunk.tool_call_chunks);
}
```
```
[ { name: "calculator", args: "", id: "call_rGqPR1ivppYUeBb0iSAF8HGP", index: 0 } ]
[ { name: undefined, args: '{"', id: undefined, index: 0 } ]
[ { name: undefined, args: "operation", id: undefined, index: 0 } ]
[ { name: undefined, args: '":"', id: undefined, index: 0 } ]
[ { name: undefined, args: "divide", id: undefined, index: 0 } ]
[ { name: undefined, args: '","', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "1", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "308", id: undefined, index: 0 } ]
[ { name: undefined, args: ',"', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "2", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "29", id: undefined, index: 0 } ]
[ { name: undefined, args: "}", id: undefined, index: 0 } ]
[]
```
Note that using the `concat` method on message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain’s various [tool output parsers](/v0.2/docs/how_to/output_parser_structured/) support streaming.
For example, below we accumulate tool call chunks:
```typescript
const streamWithAccumulation = await llmWithTools.stream(
  "What is 32993 - 2339"
);

let final;
for await (const chunk of streamWithAccumulation) {
  if (!final) {
    final = chunk;
  } else {
    final = final.concat(chunk);
  }
}

console.log(final.tool_calls);
```
```
[
  {
    name: "calculator",
    args: { operation: "subtract", number1: 32993, number2: 2339 },
    id: "call_WMhL5X0fMBBZPNeyUZY53Xuw"
  }
]
```
Few shotting with tools
-----------------------
You can give the model examples of how you would like tools to be called in order to guide generation by inputting manufactured tool call turns. For example, given the above calculator tool, we could define a new operator, `🦜`. Let’s see what happens when we use it naively:
```typescript
const res = await llmWithTools.invoke("What is 3 🦜 12");

console.log(res.content);
console.log(res.tool_calls);
```
```
It seems like you've used an emoji (🦜) in your expression, which I'm not familiar with in a mathematical context. Could you clarify what operation you meant by using the parrot emoji? For example, did you mean addition, subtraction, multiplication, or division?
[]
```
It doesn’t quite know how to interpret `🦜` as an operation. Now, let’s try giving it an example in the form of manufactured messages to steer it towards `divide`:
```typescript
import {
  HumanMessage,
  AIMessage,
  ToolMessage,
} from "@langchain/core/messages";

const res = await llmWithTools.invoke([
  new HumanMessage("What is 333382 🦜 1932?"),
  new AIMessage({
    content: "",
    tool_calls: [
      {
        id: "12345",
        name: "calculator",
        args: {
          number1: 333382,
          number2: 1932,
          operation: "divide",
        },
      },
    ],
  }),
  new ToolMessage({
    tool_call_id: "12345",
    content: "The answer is 172.558.",
  }),
  new AIMessage("The answer is 172.558."),
  new HumanMessage("What is 3 🦜 12"),
]);

console.log(res.tool_calls);
```
```
[
  {
    name: "calculator",
    args: { operation: "divide", number1: 3, number2: 12 },
    id: "call_BDuJv8QkDZ7N7Wsd6v5VDeVa"
  }
]
```
Binding model-specific formats (advanced)
-----------------------------------------
Providers adopt different conventions for formatting tool schemas. For instance, OpenAI uses a format like this:
* `type`: The type of the tool. At the time of writing, this is always “function”.
* `function`: An object containing tool parameters.
* `function.name`: The name of the schema to output.
* `function.description`: A high level description of the schema to output.
* `function.parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) object.
We can bind this model-specific format directly to the model if needed. Here’s an example:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

const modelWithTools = model.bind({
  tools: [
    {
      type: "function",
      function: {
        name: "calculator",
        description: "Can perform mathematical operations.",
        parameters: {
          type: "object",
          properties: {
            operation: {
              type: "string",
              description: "The type of operation to execute.",
              enum: ["add", "subtract", "multiply", "divide"],
            },
            number1: { type: "number", description: "First integer" },
            number2: { type: "number", description: "Second integer" },
          },
          required: ["number1", "number2"],
        },
      },
    },
  ],
});

await modelWithTools.invoke(`Whats 119 times 8?`);
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    tool_calls: [
      {
        name: "calculator",
        args: { operation: "multiply", number1: 119, number2: 8 },
        id: "call_pBlKOPNMRN4AAMkPaOKLLcyj"
      }
    ],
    invalid_tool_calls: [],
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_pBlKOPNMRN4AAMkPaOKLLcyj",
          type: "function",
          function: [Object]
        }
      ]
    },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_pBlKOPNMRN4AAMkPaOKLLcyj",
        type: "function",
        function: {
          name: "calculator",
          arguments: '{"operation":"multiply","number1":119,"number2":8}'
        }
      }
    ]
  },
  response_metadata: {
    tokenUsage: { completionTokens: 24, promptTokens: 85, totalTokens: 109 },
    finish_reason: "tool_calls"
  },
  tool_calls: [
    {
      name: "calculator",
      args: { operation: "multiply", number1: 119, number2: 8 },
      id: "call_pBlKOPNMRN4AAMkPaOKLLcyj"
    }
  ],
  invalid_tool_calls: []
}
```
This is functionally equivalent to the `bindTools()` calls above.
Next steps
----------
Now you’ve learned how to bind tool schemas to a chat model and to call those tools. Next, check out some more specific uses of tool calling:
* [Building tool-using chains and agents](/v0.2/docs/how_to/#tools)
* [Getting structured outputs from models](/v0.2/docs/how_to/structured_output/)
* * *

https://js.langchain.com/v0.2/docs/how_to/streaming
* [How to use tools](/v0.2/docs/how_to/chatbots_tools)
* [How to split code](/v0.2/docs/how_to/code_splitter)
* [How to do retrieval with contextual compression](/v0.2/docs/how_to/contextual_compression)
* [How to write a custom retriever class](/v0.2/docs/how_to/custom_retriever)
* [How to create custom Tools](/v0.2/docs/how_to/custom_tools)
* [How to debug your LLM apps](/v0.2/docs/how_to/debugging)
* [How to load CSV data](/v0.2/docs/how_to/document_loader_csv)
* [How to write a custom document loader](/v0.2/docs/how_to/document_loader_custom)
* [How to load data from a directory](/v0.2/docs/how_to/document_loader_directory)
* [How to load PDF files](/v0.2/docs/how_to/document_loader_pdf)
* [How to load JSON data](/v0.2/docs/how_to/document_loaders_json)
* [How to select examples by length](/v0.2/docs/how_to/example_selectors_length_based)
* [How to select examples by similarity](/v0.2/docs/how_to/example_selectors_similarity)
* [How to use reference examples](/v0.2/docs/how_to/extraction_examples)
* [How to handle long text](/v0.2/docs/how_to/extraction_long_text)
* [How to do extraction without using function calling](/v0.2/docs/how_to/extraction_parse)
* [Fallbacks](/v0.2/docs/how_to/fallbacks)
* [Few Shot Prompt Templates](/v0.2/docs/how_to/few_shot)
* [How to run custom functions](/v0.2/docs/how_to/functions)
* [How to construct knowledge graphs](/v0.2/docs/how_to/graph_constructing)
* [How to map values to a database](/v0.2/docs/how_to/graph_mapping)
* [How to improve results with prompting](/v0.2/docs/how_to/graph_prompting)
* [How to add a semantic layer over the database](/v0.2/docs/how_to/graph_semantic)
* [How to reindex data to keep your vectorstore in-sync with the underlying data source](/v0.2/docs/how_to/indexing)
* [How to get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to stream
On this page
How to stream
=============
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
Streaming is critical in making applications based on LLMs feel responsive to end-users.
Important LangChain primitives like LLMs, parsers, prompts, retrievers, and agents implement the LangChain Runnable Interface.
This interface provides two general approaches to stream content:
* `.stream()`: a default implementation of streaming that streams the final output from the chain.
* `streamEvents()` and `streamLog()`: these provide a way to stream both intermediate steps and final output from the chain.
Let’s take a look at both approaches!
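Since this guide focuses on `stream` and `streamEvents`, here is just a quick hedged sketch of `streamLog()` for completeness (assuming a chat model instance named `model`, like the ones instantiated below); it yields JSONPatch-style `RunLogPatch` operations describing the run, including intermediate steps, as it progresses:

// A minimal sketch of `streamLog()`, assuming a chat model instance named
// `model` like the ones instantiated below. Each yielded RunLogPatch carries
// a list of JSONPatch operations describing how the run state changed.
for await (const logPatch of model.streamLog("hello")) {
  console.log(JSON.stringify(logPatch.ops));
}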
Using Stream
============
All `Runnable` objects implement a method called `stream()`.
This method is designed to stream the final output in chunks, yielding each chunk as soon as it is available.
Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time, and yield a corresponding output chunk.
The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.
The best place to start exploring streaming is with the single most important component in LLM apps – the models themselves!
LLMs and Chat Models[](#llms-and-chat-models "Direct link to LLMs and Chat Models")
------------------------------------------------------------------------------------
Large language models can take several seconds to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.
The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model token by token.
import "dotenv/config";
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });
const stream = await model.stream("Hello! Tell me about yourself.");
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
  console.log(`${chunk.content}|`);
}
|Hello|!| I'm| an| AI| language| model| created| by| Open|AI|,| designed| to| assist| with| a| wide| range| of| tasks| by| understanding| and| generating| human|-like| text| based| on| the| input| I| receive|.| I| can| help| answer| questions|,| provide| explanations|,| offer| advice|,| write| creatively|,| and| much| more|.| How| can| I| assist| you| today|?||
Let’s have a look at one of the raw chunks:
chunks[0];
AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: []}
We got back something called an `AIMessageChunk`. This chunk represents a part of an `AIMessage`.
Message chunks are additive by design – one can simply add them up using the `.concat()` method to get the state of the response so far!
let finalChunk = chunks[0];
for (const chunk of chunks.slice(1, 5)) {
  finalChunk = finalChunk.concat(chunk);
}
finalChunk;
AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "Hello! I'm an", additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_call_chunks: [], tool_calls: [], invalid_tool_calls: [] }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello! I'm an", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: []}
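If you want the fully aggregated message rather than just the first few chunks, you can fold the entire stream the same way. A small sketch using the same `model` as above:

import type { AIMessageChunk } from "@langchain/core/messages";

// Fold every chunk of the stream into one aggregated message chunk.
let full: AIMessageChunk | undefined;
for await (const chunk of await model.stream("Hello! Tell me about yourself.")) {
  full = full === undefined ? chunk : full.concat(chunk);
}
console.log(full?.content);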
Chains[](#chains "Direct link to Chains")
------------------------------------------
Virtually all LLM applications involve more steps than just a call to a language model.
Let’s build a simple chain using `LangChain Expression Language` (`LCEL`) that combines a prompt, model and a parser and verify that streaming works.
We will use `StringOutputParser` to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the token returned by the model.
tip
LCEL is a declarative way to specify a “program” by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream`, allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard `Runnable` interface.
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");

const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const stream = await chain.stream({
  topic: "parrot",
});

for await (const chunk of stream) {
  console.log(`${chunk}|`);
}
|Sure|!| Here's| a| par|rot| joke| for| you|:|Why| did| the| par|rot| get| a| job|?|Because| he| was| tired| of| being| "|pol|ly|-em|ployment|!"| 🎉|🦜||
note
You do not have to use the `LangChain Expression Language` to use LangChain and can instead rely on a standard **imperative** programming approach by calling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.
If that works for your needs, then that’s fine by us 👌!
### Working with Input Streams[](#working-with-input-streams "Direct link to Working with Input Streams")
What if you wanted to stream JSON from the output as it was being generated?
If you were to rely on `JSON.parse` to parse the partial JSON, the parsing would fail, since the partial JSON wouldn’t be valid JSON.
You’d likely be at a complete loss as to what to do and claim that it wasn’t possible to stream JSON.
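To make that concrete, here is a tiny standalone illustration:

// Plain JSON.parse throws on a truncated prefix of a JSON document:
try {
  JSON.parse('{"countries": [{"name": "Fra');
} catch (e) {
  console.log((e as Error).message); // e.g. "Unexpected end of JSON input"
}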
Well, turns out there is a way to do it – the parser needs to operate on the **input stream**, and attempt to “auto-complete” the partial JSON into a valid state.
Let’s see such a parser in action to understand what this means.
import { JsonOutputParser } from "@langchain/core/output_parsers";

const chain = model.pipe(new JsonOutputParser());

const stream = await chain.stream(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
{ countries: [ { name: "France", population: 67372000 }, { name: "Spain", population: 47450795 }, { name: "Japan", population: 125960000 } ]}
Now, let’s **break** streaming. We’ll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON. Since this new last step is just a function call with no defined streaming behavior, the streaming output from previous steps is aggregated, then passed as a single input to the function.
danger
Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream`.
tip
Later, we will discuss the `streamEvents` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**.
// A function that operates on finalized inputs rather than on an input
// stream. Because it does not operate on input streams, it breaks streaming.
const extractCountryNames = (inputs: Record<string, any>) => {
  if (!Array.isArray(inputs.countries)) {
    return "";
  }
  return JSON.stringify(inputs.countries.map((country) => country.name));
};

const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames);

const stream = await chain.stream(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
["France","Spain","Japan"]
### Non-streaming components[](#non-streaming-components "Direct link to Non-streaming components")
As with the function in the example above, some built-in components, such as retrievers, do not offer any streaming. What happens if we try to `stream` them?
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const template = `Answer the question based only on the following context:
{context}

Question: {question}
`;
const prompt = ChatPromptTemplate.fromTemplate(template);

const vectorstore = await MemoryVectorStore.fromTexts(
  ["mitochondria is the powerhouse of the cell", "buildings are made of brick"],
  [{}, {}],
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever();

const chunks = [];

for await (const chunk of await retriever.stream(
  "What is the powerhouse of the cell?"
)) {
  chunks.push(chunk);
}

console.log(chunks);
[ [ Document { pageContent: "mitochondria is the powerhouse of the cell", metadata: {} }, Document { pageContent: "buildings are made of brick", metadata: {} } ]]
Stream just yielded the final result from that component.
This is OK! Not all components have to implement streaming – in some cases streaming is either unnecessary, difficult or just doesn’t make sense.
tip
An LCEL chain constructed using some non-streaming components will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.
Here’s an example of this:
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";
import { StringOutputParser } from "@langchain/core/output_parsers";

const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n-----\n");
};

const retrievalChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const stream = await retrievalChain.stream(
  "What is the powerhouse of the cell?"
);

for await (const chunk of stream) {
  console.log(`${chunk}|`);
}
|M|ito|ch|ond|ria| is| the| powerhouse| of| the| cell|.||
Now that we’ve seen how the `stream` method works, let’s venture into the world of streaming events!
Using Stream Events[](#using-stream-events "Direct link to Using Stream Events")
---------------------------------------------------------------------------------
Event Streaming is a **beta** API. This API may change a bit based on feedback.
note
Introduced in @langchain/core **0.1.27**.
For the `streamEvents` method to work properly:
* Any custom functions / runnables must propagate callbacks (see the sketch after this list)
* Set proper parameters on models to force the LLM to stream tokens.
* Let us know if anything doesn’t work as expected!
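As a hedged sketch of what propagating callbacks means for a custom function (the `shout` and `outer` names here are illustrative, not from this guide): accept the runnable config as a second argument and forward it to any runnable you invoke inside, so that `streamEvents` can observe the nested run.

import { RunnableLambda } from "@langchain/core/runnables";

// An inner runnable that the custom function will call.
const shout = RunnableLambda.from(async (text: string) => text.toUpperCase());

// Forwarding `config` into the nested `.invoke()` call propagates callbacks,
// so event-streaming APIs can surface events emitted by the nested run.
const outer = RunnableLambda.from(async (text: string, config) => {
  return await shout.invoke(text, config);
});

await outer.invoke("hello"); // "HELLO"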
### Event Reference[](#event-reference "Direct link to Event Reference")
Below is a reference table that shows some events that might be emitted by the various Runnable objects.
note
When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events rather than for `start` events.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on\_llm\_start | \[model name\] |  | {‘input’: ‘hello’} |  |
| on\_llm\_stream | \[model name\] | ‘Hello’ `or` AIMessageChunk(content=“hello”) |  |  |
| on\_llm\_end | \[model name\] |  | ‘Hello human!’ | {“generations”: \[…\], “llmOutput”: None, …} |
| on\_chain\_start | format\_docs |  |  |  |
| on\_chain\_stream | format\_docs | “hello world!, goodbye world!” |  |  |
| on\_chain\_end | format\_docs |  | \[Document(…)\] | “hello world!, goodbye world!” |
| on\_tool\_start | some\_tool |  | {“x”: 1, “y”: “2”} |  |
| on\_tool\_stream | some\_tool | {“x”: 1, “y”: “2”} |  |  |
| on\_tool\_end | some\_tool |  |  | {“x”: 1, “y”: “2”} |
| on\_retriever\_start | \[retriever name\] |  | {“query”: “hello”} |  |
| on\_retriever\_chunk | \[retriever name\] | {documents: \[…\]} |  |  |
| on\_retriever\_end | \[retriever name\] |  | {“query”: “hello”} | {documents: \[…\]} |
| on\_prompt\_start | \[template\_name\] |  | {“question”: “hello”} |  |
| on\_prompt\_end | \[template\_name\] |  | {“question”: “hello”} | ChatPromptValue(messages: \[SystemMessage, …\]) |
### Chat Model[](#chat-model "Direct link to Chat Model")
Let’s start off by looking at the events produced by a chat model.
const events = [];
const eventStream = await model.streamEvents("hello", { version: "v1" });

for await (const event of eventStream) {
  events.push(event);
}
13
note
Hey what’s that funny version=“v1” parameter in the API?! 😾
This is a **beta API**, and we’re almost certainly going to make some changes to it.
This version parameter will allow us to minimize such breaking changes to your code.
In short, we are annoying you now, so we don’t have to annoy you later.
Let’s take a look at a few of the start events and a few of the end events.
events.slice(0, 3);
[ { run_id: "3394874b-6a19-4d2c-a80f-bd3ff7f25e85", event: "on_llm_start", name: "ChatOpenAI", tags: [], metadata: {}, data: { input: "hello" } }, { event: "on_llm_stream", run_id: "3394874b-6a19-4d2c-a80f-bd3ff7f25e85", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }, { event: "on_llm_stream", run_id: "3394874b-6a19-4d2c-a80f-bd3ff7f25e85", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "Hello", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }]
events.slice(-2);
[ { event: "on_llm_stream", run_id: "3394874b-6a19-4d2c-a80f-bd3ff7f25e85", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }, { event: "on_llm_end", name: "ChatOpenAI", run_id: "3394874b-6a19-4d2c-a80f-bd3ff7f25e85", tags: [], metadata: {}, data: { output: { generations: [ [Array] ] } } }]
### Chain[](#chain "Direct link to Chain")
Let’s revisit the example chain that parsed streaming JSON to explore the streaming events API.
const chain = model.pipe(new JsonOutputParser());

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);

const events = [];
for await (const event of eventStream) {
  events.push(event);
}
84
If you examine the first few events, you’ll notice that there are **3** different start events rather than **2** start events.
The three start events correspond to:
1. The chain (model + parser)
2. The model
3. The parser
events.slice(0, 3);
[ { run_id: "289af8b8-7047-44e6-a475-26b88ddc7e34", event: "on_chain_start", name: "RunnableSequence", tags: [], metadata: {}, data: { input: "Output a list of the countries france, spain and japan and their populations in JSON format. Use a d"... 129 more characters } }, { event: "on_llm_start", name: "ChatOpenAI", run_id: "d43b539d-23ae-42ad-9bec-64faf58cf423", tags: [ "seq:step:1" ], metadata: {}, data: { input: { messages: [ [Array] ] } } }, { event: "on_parser_start", name: "JsonOutputParser", run_id: "91b6f786-0838-4888-8c2d-25ecd5d62d47", tags: [ "seq:step:2" ], metadata: {}, data: {} }]
What do you think you’d see if you looked at the last 3 events? What about the middle?
Let’s use this API to output the stream events from the model and the parser. We’re ignoring start events, end events, and events from the chain.
let eventCount = 0;

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 30) {
    continue;
  }
  const eventType = event.event;
  if (eventType === "on_llm_stream") {
    console.log(`Chat model chunk: ${event.data.chunk.message.content}`);
  } else if (eventType === "on_parser_stream") {
    console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`);
  }
  eventCount += 1;
}
Chat model chunk:Chat model chunk: ```Chat model chunk: jsonChat model chunk:Chat model chunk: {Chat model chunk:Chat model chunk: "Chat model chunk: countriesChat model chunk: ":Chat model chunk: [Chat model chunk:Chat model chunk: {Chat model chunk:Chat model chunk: "Chat model chunk: nameChat model chunk: ":Chat model chunk: "Chat model chunk: FranceChat model chunk: ",Chat model chunk:Chat model chunk: "Chat model chunk: populationChat model chunk: ":Chat model chunk:Chat model chunk: 673Chat model chunk: 480Chat model chunk: 00Chat model chunk:
Because both the model and the parser support streaming, we see streaming events from both components in real time! Neat! 🦜
### Filtering Events[](#filtering-events "Direct link to Filtering Events")
Because this API produces so many events, it is useful to be able to filter on events.
You can filter by either component `name`, component `tags` or component `type`.
#### By Name[](#by-name "Direct link to By Name")
const chain = model
  .withConfig({ runName: "model" })
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }));

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeNames: ["my_parser"] }
);

let eventCount = 0;

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ event: "on_parser_start", name: "my_parser", run_id: "bd05589a-0725-486b-b814-81af62ba5d80", tags: [ "seq:step:2" ], metadata: {}, data: {}}{ event: "on_parser_stream", name: "my_parser", run_id: "bd05589a-0725-486b-b814-81af62ba5d80", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: 65273511 }, { name: "Spain", population: 46754778 }, { name: "Japan", population: 126476461 } ] } }}{ event: "on_parser_end", name: "my_parser", run_id: "bd05589a-0725-486b-b814-81af62ba5d80", tags: [ "seq:step:2" ], metadata: {}, data: { output: { countries: [ { name: "France", population: 65273511 }, { name: "Spain", population: 46754778 }, { name: "Japan", population: 126476461 } ] } }}
3
#### By type[](#by-type "Direct link to By type")
const chain = model
  .withConfig({ runName: "model" })
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }));

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeTypes: ["llm"] }
);

let eventCount = 0;

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ event: "on_llm_start", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { input: { messages: [ [ [HumanMessage] ] ] } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "Sure", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "Sure", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Sure", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: ",", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: ",", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: ",", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " here's", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " here's", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " here's", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " the", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " the", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " the", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ 
event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " JSON", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " JSON", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " JSON", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " representation", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " representation", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " representation", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " of", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " of", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " of", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " the", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " the", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " the", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "model", run_id: "4c7dbe4a-57ea-40b9-9fc0-a77d7851c5fd", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " countries", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " countries", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " countries", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}
#### By Tags[](#by-tags "Direct link to By Tags")
caution
Tags are inherited by child components of a given runnable.
If you’re using tags to filter, make sure that this is what you want.
const chain = model
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }))
  .withConfig({ tags: ["my_chain"] });

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeTags: ["my_chain"] }
);

let eventCount = 0;

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ run_id: "83f4ef67-4970-44f7-8ae1-5ebace8cbce0", event: "on_chain_start", name: "RunnableSequence", tags: [ "my_chain" ], metadata: {}, data: { input: "Output a list of the countries france, spain and japan and their populations in JSON format. Use a d"... 129 more characters }}{ event: "on_llm_start", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { input: { messages: [ [ [HumanMessage] ] ] } }}{ event: "on_parser_start", name: "my_parser", run_id: "346234e7-b109-4bf7-a568-70edd67bc209", tags: [ "seq:step:2", "my_chain" ], metadata: {}, data: {}}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "```", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "```", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "```", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "json", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "json", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "json", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "\n", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "\n", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "\n", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: 
ChatGenerationChunk { text: "{\n", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "{\n", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "{\n", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " ", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " ", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " ", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: ' "', generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: ' "', tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: ' "', name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "d84bca89-bc85-4d1d-a7af-3403f5789bd0", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "countries", generationInfo: { prompt: 0, completion: 0, finish_reason: null }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "countries", tool_call_chunks: [], additional_kwargs: {}, tool_calls: [], invalid_tool_calls: [], response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "countries", name: undefined, additional_kwargs: {}, response_metadata: { prompt: 0, completion: 0, finish_reason: null }, tool_calls: [], invalid_tool_calls: [], tool_call_chunks: [] } } }}
### Streaming events over HTTP[](#streaming-events-over-http "Direct link to Streaming events over HTTP")
For convenience, `streamEvents` supports encoding streamed intermediate events as HTTP [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events), encoded as bytes. Here’s what that looks like (using a [`TextDecoder`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder) to reconvert the binary data back into a human readable string):
const chain = model
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }))
  .withConfig({ tags: ["my_chain"] });

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  {
    version: "v1",
    encoding: "text/event-stream",
  }
);

let eventCount = 0;
const textDecoder = new TextDecoder();

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 3) {
    continue;
  }
  console.log(textDecoder.decode(event));
  eventCount += 1;
}
event: datadata: {"run_id":"9344be82-f4e6-49be-9eea-88eb2ae53340","event":"on_chain_start","name":"RunnableSequence","tags":["my_chain"],"metadata":{},"data":{"input":"Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key \"name\" and \"population\""}}event: datadata: {"event":"on_llm_start","name":"ChatOpenAI","run_id":"20640210-4b45-4ac3-9e5e-ad6e6d48431f","tags":["seq:step:1","my_chain"],"metadata":{},"data":{"input":{"messages":[[{"lc":1,"type":"constructor","id":["langchain_core","messages","HumanMessage"],"kwargs":{"content":"Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key \"name\" and \"population\"","additional_kwargs":{},"response_metadata":{}}}]]}}}event: datadata: {"event":"on_parser_start","name":"my_parser","run_id":"0d035118-36bc-49a3-9bdd-5fcf8afcc5da","tags":["seq:step:2","my_chain"],"metadata":{},"data":{}}event: datadata: {"event":"on_llm_stream","name":"ChatOpenAI","run_id":"20640210-4b45-4ac3-9e5e-ad6e6d48431f","tags":["seq:step:1","my_chain"],"metadata":{},"data":{"chunk":{"text":"","generationInfo":{"prompt":0,"completion":0,"finish_reason":null},"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"","tool_call_chunks":[],"additional_kwargs":{},"tool_calls":[],"invalid_tool_calls":[],"response_metadata":{"prompt":0,"completion":0,"finish_reason":null}}}}}}
A nice feature of this format is that you can pass the resulting stream directly into a native [HTTP response object](https://developer.mozilla.org/en-US/docs/Web/API/Response) with the correct headers (commonly used by frameworks like [Hono](https://hono.dev/) and [Next.js](https://nextjs.org/)), then parse that stream on the frontend. Your server-side handler would look something like this:
const handler = async () => {
  const eventStream = await chain.streamEvents(
    `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
    {
      version: "v1",
      encoding: "text/event-stream",
    }
  );
  return new Response(eventStream, {
    headers: {
      "content-type": "text/event-stream",
    },
  });
};
And your frontend could look like this (using the [`@microsoft/fetch-event-source`](https://www.npmjs.com/package/@microsoft/fetch-event-source) package to fetch and parse the event source):
import { fetchEventSource } from "@microsoft/fetch-event-source";

const makeChainRequest = async () => {
  await fetchEventSource("https://your_url_here", {
    method: "POST",
    body: JSON.stringify({
      foo: "bar",
    }),
    onmessage: (message) => {
      if (message.event === "data") {
        console.log(message.data);
      }
    },
    onerror: (err) => {
      console.log(err);
    },
  });
};
### Non-streaming components[](#non-streaming-components-1 "Direct link to Non-streaming components")
Remember how some components don’t stream well because they don’t operate on **input streams**?
While such components can break streaming of the final output when using `stream`, `streamEvents` will still yield streaming events from intermediate steps that support streaming!
// A function that operates on finalized inputs rather than on an input
// stream. Because it does not operate on input streams, it breaks streaming.
const extractCountryNames = (inputs: Record<string, any>) => {
  if (!Array.isArray(inputs.countries)) {
    return "";
  }
  return JSON.stringify(inputs.countries.map((country) => country.name));
};

const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames);

const stream = await chain.stream(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
["France","Spain","Japan"]
As expected, the `stream` API doesn’t work correctly because `extractCountryNames` doesn’t operate on streams.
Now, let’s confirm that with `streamEvents` we’re still seeing streaming output from the model and the parser.
const eventStream = await chain.streamEvents(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);

let eventCount = 0;

for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 30) {
    continue;
  }
  const eventType = event.event;
  if (eventType === "on_llm_stream") {
    console.log(`Chat model chunk: ${event.data.chunk.message.content}`);
  } else if (eventType === "on_parser_stream") {
    console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`);
  }
  eventCount += 1;
}
Chat model chunk:Chat model chunk: Here'sChat model chunk: howChat model chunk: youChat model chunk: canChat model chunk: representChat model chunk: theChat model chunk: countriesChat model chunk: FranceChat model chunk: ,Chat model chunk: SpainChat model chunk: ,Chat model chunk: andChat model chunk: JapanChat model chunk: ,Chat model chunk: alongChat model chunk: withChat model chunk: theirChat model chunk: populationsChat model chunk: ,Chat model chunk: inChat model chunk: JSONChat model chunk: formatChat model chunk: :Chat model chunk: ```Chat model chunk: jsonChat model chunk:Chat model chunk: {
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to do query validation
](/v0.2/docs/how_to/sql_query_checking)[
Next
How to create a time-weighted retriever
](/v0.2/docs/how_to/time_weighted_vectorstore)
* [LLMs and Chat Models](#llms-and-chat-models)
* [Chains](#chains)
* [Working with Input Streams](#working-with-input-streams)
* [Non-streaming components](#non-streaming-components)
* [Using Stream Events](#using-stream-events)
* [Event Reference](#event-reference)
* [Chat Model](#chat-model)
* [Chain](#chain)
* [Filtering Events](#filtering-events)
* [Streaming events over HTTP](#streaming-events-over-http)
* [Non-streaming components](#non-streaming-components-1)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/versions/release_policy | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
LangChain releases
==================
The LangChain ecosystem is composed of different component packages (e.g., `@langchain/core`, `langchain`, `@langchain/community`, `@langchain/langgraph`, partner packages, etc.).
Versioning
------------------------------------------------------
### `langchain` and `@langchain/core`
`langchain` and `@langchain/core` follow [semantic versioning](https://semver.org/) in the format of 0.**Y**.**Z**. The packages are under rapid development, so we are currently versioning them with a major version of 0.
Minor version increases will occur for:
* Breaking changes for any public interfaces marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features
When upgrading between minor versions, users should review the list of breaking changes and deprecations.
From time to time, we will version packages as **release candidates**. These are versions that are intended to be released as stable versions, but we want to get feedback from the community before doing so. Release candidates will be versioned as 0.**Y**.**Z**-rc.**N**. For example, `0.2.0-rc.1`. If no issues are found, the release candidate will be released as a stable version with the same version number. If issues are found, we will release a new release candidate with an incremented `N` value (e.g., `0.2.0-rc.2`).
### Other packages in the LangChain ecosystem
Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `@langchain/core`.
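For example, an application might use tilde ranges in its `package.json` to stay pinned to a specific minor line of each package (a hypothetical fragment; the version numbers are illustrative):

```json
{
  "dependencies": {
    "@langchain/core": "~0.1.0",
    "langchain": "~0.2.0"
  }
}
```

A tilde range accepts new patch releases (bug fixes and new features) while rejecting a new minor version, which is where breaking changes may land.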
Release cadence
---------------------------------------------------------------------
We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `@langchain/core` by at least 2-3 months, as such releases may contain breaking changes.
Patch versions are released frequently as they contain bug fixes and new features.
API stability
---------------------------------------------------------------
The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `@langchain/core` will continue to evolve to better serve the needs of our users.
Even though both `langchain` and `@langchain/core` are currently in a pre-1.0 state, we are committed to maintaining API stability in these packages.
* Breaking changes to the public API will result in a minor version bump (the second digit)
* Any bug fixes or new features will result in a patch version bump (the third digit)
We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.
### Stability of other packages
The stability of other packages in the LangChain ecosystem may vary:
* `@langchain/community` is a community-maintained package that contains third-party integrations. While we do our best to review and test changes to it, `@langchain/community` is expected to see more breaking changes than `langchain` and `@langchain/core` because it contains many community contributions.
* Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.
### What is "API stability"?
API stability means:
* All the public APIs (everything in this documentation) will not be moved or renamed without providing backwards-compatible aliases.
* If new features are added to these APIs – which is quite possible – they will not break or change the meaning of existing methods. In other words, "stable" does not (necessarily) mean "complete."
* If, for some reason, an API declared stable must be removed or replaced, it will be declared deprecated but will remain in the API for at least two minor releases. Warnings will be issued when the deprecated method is called (see the sketch below).
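As a concrete sketch of this policy (illustrative only, not LangChain source; the `predictMessages`/`invoke` pairing is a real deprecation discussed in the v0.2 migration notes):

```typescript
import { AIMessage, BaseMessage } from "@langchain/core/messages";

// Illustrative only: a deprecated method kept as a warning-emitting
// alias for its replacement until the removal window has passed.
class ExampleChatModel {
  async invoke(messages: BaseMessage[]): Promise<AIMessage> {
    return new AIMessage(`Received ${messages.length} messages.`);
  }

  /** @deprecated Use `invoke` instead. Will be removed after at least two minor releases. */
  async predictMessages(messages: BaseMessage[]): Promise<AIMessage> {
    console.warn("predictMessages is deprecated; use invoke instead.");
    return this.invoke(messages);
  }
}
```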
### APIs marked as internal
Certain APIs are explicitly marked as “internal” in a couple of ways:
* Some documentation refers to internals and mentions them as such. If the documentation says that something is internal, it may change.
* Functions, methods, and other objects prefixed by a leading underscore (**`_`**). If any method starts with a single **`_`**, it’s an internal API.
* **Exception:** Certain methods are prefixed with `_` but do not contain an implementation. These methods are _meant_ to be overridden by subclasses that provide the implementation, and such methods are generally part of the **Public API** of LangChain (see the sketch below).
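A minimal sketch of that exception (illustrative only, not LangChain source): the `_`-prefixed hook has no base implementation and exists precisely so subclasses can supply one, making it part of the public extension surface.

```typescript
// Illustrative only: the public entrypoint delegates to an
// underscore-prefixed hook that subclasses are expected to implement.
abstract class ExampleRetriever {
  // Stable public entrypoint that callers use.
  async getRelevantDocuments(query: string): Promise<string[]> {
    return this._getRelevantDocuments(query);
  }

  // Prefixed with `_`, but meant to be overridden by subclasses,
  // so it is effectively part of the public API.
  protected abstract _getRelevantDocuments(query: string): Promise<string[]>;
}

class KeywordRetriever extends ExampleRetriever {
  protected async _getRelevantDocuments(query: string): Promise<string[]> {
    return [`Results for "${query}" would go here.`];
  }
}
```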
Deprecation policy
------------------------------------------------------------------------------
We will generally avoid deprecating features until a better alternative is available.
When a feature is deprecated, it will continue to work in the current and next minor version of `langchain` and `@langchain/core`. After that, the feature will be removed.
Since we're expecting to space out minor releases by at least 2-3 months, this means that a feature can be removed within 2-6 months of being deprecated.
In some situations, we may allow the feature to remain in the code base for longer periods of time, if it's not causing issues in the packages, to reduce the burden on users.
https://js.langchain.com/v0.2/docs/versions/v0_2
LangChain v0.2
==============
LangChain v0.2 was released in May 2024. This release includes a number of breaking changes and deprecations. This document contains a guide on upgrading to 0.2.x, as well as a list of deprecations and breaking changes.
Migration
---------------------------------------------------
This documentation will help you upgrade your code to LangChain `0.2.x`. To prepare for migration, we first recommend you take the following steps:
1. Install the 0.2.x versions of `@langchain/core` and `langchain`, and upgrade to recent versions of the other packages that you may be using (e.g. `@langchain/langgraph`, `@langchain/community`, `@langchain/openai`, etc.)
2. Verify that your code runs properly with the new packages (e.g., unit tests pass)
3. Install a recent version of `@langchain/scripts`, and use the tool to replace old imports used by your code with the new imports. (See instructions below.)
4. Manually resolve any remaining deprecation warnings
5. Re-run unit tests
### Upgrade to new imports
We created a tool to help migrate your code. This tool is still in **beta** and may not cover all cases, but we hope that it will help you migrate your code more quickly.
The migration script has the following limitations:
1. It's limited to helping users move from old imports to new imports. It doesn't help address other deprecations.
2. It can't handle imports that involve `as` (see the example below).
3. New imports are always placed in global scope, even if the old import that was replaced was located inside some local scope (e.g., a function body).
4. It will likely miss some deprecated imports.
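For instance, an aliased import like the following (a hypothetical case, reusing the `Document` move from the table below) would be skipped by the script and has to be updated by hand:

```typescript
// The migration script does not rewrite imports that use `as` aliases,
// so this pre-0.2 import must be migrated manually:
import { Document as LangChainDocument } from "langchain/schema/document";

// Hand-migrated equivalent:
// import { Document as LangChainDocument } from "@langchain/core/documents";
```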
Here is an example of the import changes that the migration script can help apply automatically:
| From Package | To Package | Deprecated Import | New Import |
| --- | --- | --- | --- |
| `langchain` | `@langchain/community` | `import { UpstashVectorStore } from "langchain/vectorstores/upstash"` | `import { UpstashVectorStore } from "@langchain/community/vectorstores/upstash"` |
| `@langchain/community` | `@langchain/openai` | `import { ChatOpenAI } from "@langchain/community/chat_models/openai"` | `import { ChatOpenAI } from "@langchain/openai"` |
| `langchain` | `@langchain/core` | `import { Document } from "langchain/schema/document"` | `import { Document } from "@langchain/core/documents"` |
| `langchain` | `@langchain/textsplitters` | `import { RecursiveCharacterTextSplitter } from "langchain/text_splitter"` | `import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"` |
#### Deprecation timeline
We have two main types of deprecations:
1. Code that was moved from `langchain` into another package (e.g., `@langchain/community`). If you try to import it from `langchain`, the import will fail since the entrypoint has been removed.
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things (e.g., the `predictMessages` method on chat models has been deprecated in favor of `invoke`).
Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.
#### Installation
**Note:** The 0.2.x migration script is only available in version `0.0.14-rc.1` or later of `@langchain/scripts`.

```bash
# npm
npm i @langchain/scripts@0.0.14-rc.1

# Yarn
yarn add @langchain/scripts@0.0.14-rc.1

# pnpm
pnpm add @langchain/scripts@0.0.14-rc.1
```
#### Usage
Given that the migration script is not perfect, you should make sure you have a backup of your code first (e.g., using version control like `git`).
For example, say your code still uses `import { ChatOpenAI } from "@langchain/community/chat_models/openai";`. Invoking the migration script will replace this import with `import { ChatOpenAI } from "@langchain/openai";`:
```typescript
import { updateEntrypointsFrom0_x_xTo0_2_x } from "@langchain/scripts/migrations";

// This path is used in the following glob pattern: `${projectPath}/**/*.{ts,tsx,js,jsx}`.
const pathToMyProject = "...";

updateEntrypointsFrom0_x_xTo0_2_x({
  projectPath: pathToMyProject,
  shouldLog: true,
});
```
#### Other options
```typescript
updateEntrypointsFrom0_x_xTo0_2_x({
  projectPath: pathToMyProject,
  // Path to the tsConfig file. This will be used to load all the project files into the script.
  tsConfigPath: "tsconfig.json",
  // If true, the script will not save any changes, but will log the changes that would be made.
  testRun: true,
  // A list of .ts file paths to check. If this is provided, the script will only check these files.
  files: ["..."],
});
```
https://js.langchain.com/v0.2/docs/versions/packages
📕 Package versioning
=====================
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a maintainer and published to [NPM](https://npm.org/). The different packages are versioned slightly differently.
`@langchain/core`
-------------------------------------------------------------------
`@langchain/core` is currently on version `0.1.x`.
As `@langchain/core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception to this is anything marked as `beta` (flagged in the API reference, with warnings emitted when such functionality is used). Beta features exist because, given the rate of change of the field, being able to move quickly is still a priority.
Minor version increases will occur for:
* Breaking changes for any public interfaces marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features
`langchain`
-----------------------------------------------------
`langchain` is currently on version `0.2.x`.
Minor version increases will occur for:
* Breaking changes for any public interfaces NOT marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features.
`@langchain/community`
----------------------------------------------------------------------------------
`@langchain/community` is currently on version `0.2.x`.
All changes will be accompanied by the same type of version increase as changes in `langchain`.
Partner Packages
------------------------------------------------------------------------
Partner packages are versioned independently.
https://js.langchain.com/v0.2/docs/how_to/document_loader_custom
How to write a custom document loader
=====================================
If you want to implement your own Document Loader, you have a few options.
### Subclassing `BaseDocumentLoader`
You can extend the `BaseDocumentLoader` class directly. The `BaseDocumentLoader` class provides a few convenience methods for loading documents from a variety of sources.
```typescript
abstract class BaseDocumentLoader implements DocumentLoader {
  abstract load(): Promise<Document[]>;
}
```
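For example, a minimal loader over an in-memory string only needs to implement `load` (a hypothetical sketch; the `@langchain/core/document_loaders/base` entrypoint is assumed here):

```typescript
import { BaseDocumentLoader } from "@langchain/core/document_loaders/base";
import { Document } from "@langchain/core/documents";

// A hypothetical loader that wraps an in-memory string in a single Document.
class StringLoader extends BaseDocumentLoader {
  constructor(private readonly text: string) {
    super();
  }

  async load(): Promise<Document[]> {
    return [new Document({ pageContent: this.text })];
  }
}

const docs = await new StringLoader("Hello, world!").load();
```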
### Subclassing `TextLoader`
If you want to load documents from a text file, you can extend the `TextLoader` class. The `TextLoader` class takes care of reading the file, so all you have to do is implement a `parse` method.
```typescript
abstract class TextLoader extends BaseDocumentLoader {
  abstract parse(raw: string): Promise<string[]>;
}
```
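For instance, a loader that treats each blank-line-separated paragraph in a text file as its own document could look like this (a hypothetical sketch; `TextLoader` is assumed to be importable from `langchain/document_loaders/fs/text`):

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

// A hypothetical loader that yields one document per
// blank-line-separated paragraph in the file.
class ParagraphLoader extends TextLoader {
  async parse(raw: string): Promise<string[]> {
    return raw
      .split(/\n\s*\n/)
      .map((paragraph) => paragraph.trim())
      .filter((paragraph) => paragraph.length > 0);
  }
}

const docs = await new ParagraphLoader("./example.txt").load();
```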
### Subclassing `BufferLoader`
If you want to load documents from a binary file, you can extend the `BufferLoader` class. The `BufferLoader` class takes care of reading the file, so all you have to do is implement a `parse` method.
```typescript
abstract class BufferLoader extends BaseDocumentLoader {
  abstract parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]>;
}
```
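As a sketch, here is a hypothetical binary loader that decodes the raw bytes as UTF-8 and returns a single document (assuming `BufferLoader` is importable from `langchain/document_loaders/fs/buffer`):

```typescript
import { BufferLoader } from "langchain/document_loaders/fs/buffer";
import { Document } from "@langchain/core/documents";

// A hypothetical loader that decodes a binary file as UTF-8 text.
class Utf8Loader extends BufferLoader {
  async parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]> {
    return [new Document({ pageContent: raw.toString("utf-8"), metadata })];
  }
}

const docs = await new Utf8Loader("./example.bin").load();
```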
https://js.langchain.com/v0.2/docs/how_to/contextual_compression
How to do retrieval with contextual compression
===============================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.
To use the Contextual Compression Retriever, you'll need:
* a base retriever
* a Document Compressor
The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
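To make that flow concrete, here is a conceptual sketch of what happens on each query. This is an illustration, not the actual `ContextualCompressionRetriever` implementation; it only assumes the compressor exposes a `compressDocuments(documents, query)` method, as LangChain's document compressor interface does.

```typescript
import type { Document } from "@langchain/core/documents";

// Conceptual sketch only -- the real class lives in
// "langchain/retrievers/contextual_compression".
async function compressedRetrieve(
  baseRetriever: { invoke: (query: string) => Promise<Document[]> },
  baseCompressor: {
    compressDocuments: (docs: Document[], query: string) => Promise<Document[]>;
  },
  query: string
): Promise<Document[]> {
  // 1. The query is passed to the base retriever unchanged.
  const initialDocs = await baseRetriever.invoke(query);
  // 2. The document compressor may shorten individual documents or drop
  //    some of them entirely, using the query as context.
  return baseCompressor.compressDocuments(initialDocs, query);
}
```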
Using a vanilla vector store retriever
--------------------------------------
Let's start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). Given an example question, our retriever returns one or two relevant docs and a few irrelevant docs, and even the relevant docs contain a lot of irrelevant information. To extract only the relevant content from each result, we use an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai @langchain/community

# Yarn
yarn add @langchain/openai @langchain/community

# pnpm
pnpm add @langchain/openai @langchain/community
```
```typescript
import * as fs from "fs";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { LLMChainExtractor } from "langchain/retrievers/document_compressors/chain_extract";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
});
const baseCompressor = LLMChainExtractor.fromLLM(model);

const text = fs.readFileSync("state_of_the_union.txt", "utf8");

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const retriever = new ContextualCompressionRetriever({
  baseCompressor,
  baseRetriever: vectorStore.asRetriever(),
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.',
        metadata: [Object]
      },
      Document {
        pageContent: '"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
        metadata: [Object]
      },
      Document {
        pageContent: 'The onslaught of state laws targeting transgender Americans and their families is wrong.',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [ContextualCompressionRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [LLMChainExtractor](https://v02.api.js.langchain.com/classes/langchain_retrievers_document_compressors_chain_extract.LLMChainExtractor.html) from `langchain/retrievers/document_compressors/chain_extract`
`EmbeddingsFilter`
------------------
Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
This is most useful for non-vector store retrievers, where we may not have control over the size of the returned chunks, or as part of a pipeline, as outlined below.
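At its core, this filtering step is just a similarity cutoff. As a rough, self-contained sketch of the underlying idea (an illustration, not the actual `EmbeddingsFilter` implementation):

```typescript
// Embed the query and each document, then keep only documents whose cosine
// similarity to the query exceeds a threshold.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function filterBySimilarity(
  queryEmbedding: number[],
  docEmbeddings: number[][],
  threshold: number
): number[] {
  // Returns the indices of documents that pass the cutoff.
  return docEmbeddings
    .map((embedding, i) => ({
      i,
      score: cosineSimilarity(queryEmbedding, embedding),
    }))
    .filter(({ score }) => score >= threshold)
    .map(({ i }) => i);
}
```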
Here's an example:
```typescript
import * as fs from "fs";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";

const baseCompressor = new EmbeddingsFilter({
  embeddings: new OpenAIEmbeddings(),
  similarityThreshold: 0.8,
});

const text = fs.readFileSync("state_of_the_union.txt", "utf8");

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const retriever = new ContextualCompressionRetriever({
  baseCompressor,
  baseRetriever: vectorStore.asRetriever(),
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. \n' +
          '\n' +
          'A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n' +
          '\n' +
          'And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n' +
          '\n' +
          'We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n' +
          '\n' +
          'We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n' +
          '\n' +
          'We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.',
        metadata: [Object]
      },
      Document {
        pageContent: 'In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n' +
          '\n' +
          'We cannot let this happen. \n' +
          '\n' +
          'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n' +
          '\n' +
          'Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n' +
          '\n' +
          'One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n' +
          '\n' +
          'And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ContextualCompressionRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [EmbeddingsFilter](https://v02.api.js.langchain.com/classes/langchain_retrievers_document_compressors_embeddings_filter.EmbeddingsFilter.html) from `langchain/retrievers/document_compressors/embeddings_filter`
Stringing compressors and document transformers together
---------------------------------------------------------
Using the `DocumentCompressorPipeline`, we can also easily combine multiple compressors in sequence. Along with compressors, we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply apply some transformation to a set of documents. For example, text splitters can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsFilter` can filter out documents based on the similarity of the individual chunks to the input query.
Below we create a compressor pipeline by first splitting raw webpage documents retrieved from the [Tavily web search API retriever](/v0.2/docs/integrations/retrievers/tavily) into smaller chunks, then filtering based on relevance to the query. The result is smaller chunks that are semantically similar to the input query. This skips the need to add documents to a vector store to perform similarity search, which can be useful for one-off use cases:
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
import { DocumentCompressorPipeline } from "langchain/retrievers/document_compressors";

const embeddingsFilter = new EmbeddingsFilter({
  embeddings: new OpenAIEmbeddings(),
  similarityThreshold: 0.8,
  k: 5,
});

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 200,
  chunkOverlap: 0,
});

const compressorPipeline = new DocumentCompressorPipeline({
  transformers: [textSplitter, embeddingsFilter],
});

const baseRetriever = new TavilySearchAPIRetriever({
  includeRawContent: true,
});

const retriever = new ContextualCompressionRetriever({
  baseCompressor: compressorPipeline,
  baseRetriever,
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer in the 2022 State of the Union?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'Justice Stephen Breyer talks to President Joe Biden ahead of the State of the Union address on Tuesday. (jabin botsford/Agence France-Presse/Getty Images)',
        metadata: [Object]
      },
      Document {
        pageContent: 'President Biden recognized outgoing US Supreme Court Justice Stephen Breyer during his State of the Union on Tuesday.',
        metadata: [Object]
      },
      Document {
        pageContent: 'What we covered here\n' +
          'Biden recognized outgoing Supreme Court Justice Breyer during his speech',
        metadata: [Object]
      },
      Document {
        pageContent: 'States Supreme Court. Justice Breyer, thank you for your service,” the president said.',
        metadata: [Object]
      },
      Document {
        pageContent: 'Court," Biden said. "Justice Breyer, thank you for your service."',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ContextualCompressionRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [EmbeddingsFilter](https://v02.api.js.langchain.com/classes/langchain_retrievers_document_compressors_embeddings_filter.EmbeddingsFilter.html) from `langchain/retrievers/document_compressors/embeddings_filter`
* [TavilySearchAPIRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_tavily_search_api.TavilySearchAPIRetriever.html) from `@langchain/community/retrievers/tavily_search_api`
* [DocumentCompressorPipeline](https://v02.api.js.langchain.com/classes/langchain_retrievers_document_compressors.DocumentCompressorPipeline.html) from `langchain/retrievers/document_compressors`
Next steps
----------
You've now learned a few ways to use contextual compression to filter irrelevant data out of your results.
See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
https://js.langchain.com/v0.2/docs/how_to/custom_tools
How to create custom Tools
==========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain tools](/v0.2/docs/concepts#tools)
* [Agents](/v0.2/docs/concepts/#agents)
When constructing your own agent, you will need to provide it with a list of Tools that it can use. While LangChain includes some prebuilt tools, it is often more useful to define tools with your own custom logic. This guide will walk you through how to do so with LangChain's `Dynamic` tool classes.
In this guide, we will walk through how to define tools for two functions:
1. A multiplier function that multiplies two numbers together
2. A made-up search function that always returns the string “LangChain”
The biggest difference here is that the first function requires an object with multiple input fields, while the second one only accepts an object with a single field. Some older agents only work with functions that require single inputs, so it’s important to understand the distinction.
`DynamicStructuredTool`
-----------------------
Newer and more advanced agents can handle more flexible tools that take multiple inputs. You can use the [`DynamicStructuredTool`](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) class to declare them. Here’s an example - note that tools must always return strings!
```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "multiply two numbers together",
  schema: z.object({
    a: z.number().describe("the first number to multiply"),
    b: z.number().describe("the second number to multiply"),
  }),
  func: async ({ a, b }: { a: number; b: number }) => {
    return (a * b).toString();
  },
});

await multiplyTool.invoke({ a: 8, b: 9 });
```
"72"
`DynamicTool`
-------------
For older agents that only support tools accepting a single string input, you can pass the relevant parameters to the [`DynamicTool`](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicTool.html) class. In this case, no schema is required:
```typescript
import { DynamicTool } from "@langchain/core/tools";

const searchTool = new DynamicTool({
  name: "search",
  description: "look things up online",
  func: async (_input: string) => {
    return "LangChain";
  },
});

await searchTool.invoke("foo");
```
"LangChain"
https://js.langchain.com/v0.2/docs/how_to/document_loader_directory
How to load data from a directory
=================================
This covers how to load all documents in a directory.
The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example folder:
```
src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv
```
Example code:
```typescript
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);
const docs = await loader.load();
console.log({ docs });
```
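`DirectoryLoader` also accepts additional constructor parameters that control recursion into subdirectories and what to do with file types that have no registered loader. A minimal sketch, assuming the `recursive` flag and `UnknownHandling` enum exposed by the loader:

```typescript
import {
  DirectoryLoader,
  UnknownHandling,
} from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Load only .txt files, descend into nested subdirectories, and silently
// skip any file whose extension has no registered loader.
const textOnlyLoader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  { ".txt": (path) => new TextLoader(path) },
  true, // recursive
  UnknownHandling.Ignore
);
const textDocs = await textOnlyLoader.load();
console.log(textDocs.length);
```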
https://js.langchain.com/v0.2/docs/security
Security
========
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs, and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with, and manipulate external resources.
Best Practices
--------------
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials, as in the sketch below.
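For instance, here is a minimal sketch of that last mitigation in LangChain.js, assuming a Postgres database and a `reporting_readonly` role that you have created with SELECT-only grants (the connection details and table name are illustrative):

import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";

// Hypothetical read-only role, created in Postgres with e.g.:
//   GRANT SELECT ON orders TO reporting_readonly;
const datasource = new DataSource({
  type: "postgres",
  host: "localhost",
  port: 5432,
  username: "reporting_readonly",
  password: process.env.READONLY_DB_PASSWORD,
  database: "app",
});

// Scope the wrapper to only the table the agent actually needs.
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
  includesTables: ["orders"],
});

Even if the model generates a destructive query, the read-only credentials and table scoping limit the blast radius.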
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
Reporting a Vulnerability
-------------------------
Please report security vulnerabilities by email to [security@langchain.dev](mailto:security@langchain.dev). This will ensure the issue is promptly triaged and acted upon as needed.
https://js.langchain.com/v0.2/docs/how_to/document_loader_csv
How to load CSV data
====================
> A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
Load CSV data with a single row per document.
Setup
-----
* npm: `npm install d3-dsv@2`
* Yarn: `yarn add d3-dsv@2`
* pnpm: `pnpm add d3-dsv@2`
Usage, extracting all columns
-----------------------------
Example CSV file:
id,text
1,This is a sentence.
2,This is another sentence.
Example code:
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 1
text: This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 2
text: This is another sentence.",
  },
]
*/
Usage, extracting a single column
---------------------------------
Example CSV file:
id,text
1,This is a sentence.
2,This is another sentence.
Example code:
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
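The loader can also take an options object in place of the column name. As a hedged sketch, assuming your installed version exposes the `column` and `separator` fields of `CSVLoaderOptions`, you could parse a semicolon-delimited file and extract one column like this:

import { CSVLoader } from "langchain/document_loaders/fs/csv";

// `column` and `separator` are assumed CSVLoaderOptions fields; check your
// installed version's API reference before relying on them.
const loader = new CSVLoader("src/document_loaders/example_data/example.csv", {
  column: "text",
  separator: ";",
});

const docs = await loader.load();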
https://js.langchain.com/v0.2/docs/how_to/example_selectors_length_based
How to select examples by length
================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Example selectors](/v0.2/docs/how_to/example_selectors)
This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
import {
  PromptTemplate,
  FewShotPromptTemplate,
} from "@langchain/core/prompts";
import { LengthBasedExampleSelector } from "@langchain/core/example_selectors";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a LengthBasedExampleSelector that will be used to select the examples.
  const exampleSelector = await LengthBasedExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    {
      examplePrompt,
      maxLength: 25,
    }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // An example with small input, so it selects all examples.
  console.log(await dynamicPrompt.format({ adjective: "big" }));
  /*
    Give the antonym of every input

    Input: happy
    Output: sad

    Input: tall
    Output: short

    Input: energetic
    Output: lethargic

    Input: sunny
    Output: gloomy

    Input: windy
    Output: calm

    Input: big
    Output:
  */

  // An example with long input, so it selects only one example.
  const longString =
    "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else";
  console.log(await dynamicPrompt.format({ adjective: longString }));
  /*
    Give the antonym of every input

    Input: happy
    Output: sad

    Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
    Output:
  */
}
#### API Reference:
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [LengthBasedExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.LengthBasedExampleSelector.html) from `@langchain/core/example_selectors`
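Example selectors also expose an async `addExample` method, so you can grow the candidate pool after construction. Continuing from the snippet above, a minimal sketch (the added antonym pair is illustrative):

// Add one more candidate example; later calls to `format` will consider it
// when fitting examples under `maxLength`.
await exampleSelector.addExample({ input: "big", output: "small" });

console.log(await dynamicPrompt.format({ adjective: "quiet" }));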
Next steps
----------
You've now learned a bit about using a length-based example selector.
Next, check out this guide on how to use a [similarity based example selector](/v0.2/docs/how_to/example_selectors_similarity).
https://js.langchain.com/v0.2/docs/how_to/custom_retriever
How to write a custom retriever class
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
To create your own retriever, you need to extend the [`BaseRetriever`](https://v02.api.js.langchain.com/classes/langchain_core_retrievers.BaseRetriever.html) class and implement a `_getRelevantDocuments` method that takes a `string` as its first parameter (and an optional `runManager` for tracing). This method should return an array of `Document`s fetched from some source. This process can involve calls to a database, to the web using `fetch`, or any other source. Note the underscore before `_getRelevantDocuments()`. The base class wraps the non-prefixed version in order to automatically handle tracing of the original call.
Here's an example of a custom retriever that returns static documents:
import {
  BaseRetriever,
  type BaseRetrieverInput,
} from "@langchain/core/retrievers";
import type { CallbackManagerForRetrieverRun } from "@langchain/core/callbacks/manager";
import { Document } from "@langchain/core/documents";

export interface CustomRetrieverInput extends BaseRetrieverInput {}

export class CustomRetriever extends BaseRetriever {
  lc_namespace = ["langchain", "retrievers"];

  constructor(fields?: CustomRetrieverInput) {
    super(fields);
  }

  async _getRelevantDocuments(
    query: string,
    runManager?: CallbackManagerForRetrieverRun
  ): Promise<Document[]> {
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // const additionalDocs = await someOtherRunnable.invoke(params, runManager?.getChild());
    return [
      // ...additionalDocs,
      new Document({
        pageContent: `Some document pertaining to ${query}`,
        metadata: {},
      }),
      new Document({
        pageContent: `Some other document pertaining to ${query}`,
        metadata: {},
      }),
    ];
  }
}
Then, you can call `.invoke()` as follows:
const retriever = new CustomRetriever({});

await retriever.invoke("LangChain docs");
[
  Document {
    pageContent: 'Some document pertaining to LangChain docs',
    metadata: {}
  },
  Document {
    pageContent: 'Some other document pertaining to LangChain docs',
    metadata: {}
  }
]
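Since `_getRelevantDocuments` can run arbitrary async code, the same pattern works for remote sources. A minimal sketch of a retriever backed by a hypothetical HTTP search endpoint (the URL and response shape are assumptions for illustration):

import { BaseRetriever } from "@langchain/core/retrievers";
import { Document } from "@langchain/core/documents";

export class SearchApiRetriever extends BaseRetriever {
  lc_namespace = ["langchain", "retrievers"];

  async _getRelevantDocuments(query: string): Promise<Document[]> {
    // Hypothetical endpoint returning `{ results: { text: string }[] }`.
    const response = await fetch(
      `https://example.com/search?q=${encodeURIComponent(query)}`
    );
    const { results } = await response.json();
    return results.map(
      (result: { text: string }) =>
        new Document({ pageContent: result.text, metadata: {} })
    );
  }
}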
Next steps
----------
You've now seen an example of implementing your own custom retriever.
Next, check out the individual sections for deeper dives on specific retrievers, or the [broader tutorial on RAG](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/how_to/document_loaders_json
How to load JSON data
=====================
> [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
> [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value.
The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract.
### No JSON pointer example
The simplest way to use it is to specify no JSON pointer. The loader will then load all strings it finds in the JSON object.
Example JSON file:
{ "texts": ["This is a sentence.", "This is another sentence."]}
Example code:
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
### Using JSON pointer example
For more advanced scenarios, you can choose which keys in your JSON object to extract strings from.
In this example, we want to only extract information from "from" and "surname" entries.
{ "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } }}
Example code:
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    pageContent: 'LinkedIn Job',
    metadata: { source: './src/json/example.json', line: 1 }
  },
  Document {
    pageContent: 'LinkedIn Job2',
    metadata: { source: './src/json/example.json', line: 2 }
  },
  Document {
    pageContent: 'bob',
    metadata: { source: './src/json/example.json', line: 3 }
  }
]
*/
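For [JSON Lines](https://jsonlines.org/) files, the same module exports a `JSONLinesLoader`, which takes a JSON pointer applied to each line. A minimal sketch (the `.jsonl` path and `/html` key are illustrative):

import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

// Each line of the file is a standalone JSON value; the pointer selects
// the "html" key from every line.
const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);

const docs = await loader.load();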
https://js.langchain.com/v0.2/docs/how_to/example_selectors_similarity
* [How to get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to select examples by similarity
On this page
How to select examples by similarity
====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Example selectors](/v0.2/docs/how_to/example_selectors)
* [Vector stores](/v0.2/docs/concepts#vectorstores)
The `SemanticSimilarityExampleSelector` selects examples based on their similarity to the input: it embeds the examples and the input, then chooses the examples whose embeddings have the greatest cosine similarity to the input's.
The fields of the examples object will be used as parameters to format the `examplePrompt` passed to the `FewShotPromptTemplate`. Each example should therefore contain all required fields for the example prompt you are using.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(
  "Input: {input}\nOutput: {output}"
);

// Create a SemanticSimilarityExampleSelector that will be used to select the examples.
const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
    { input: "energetic", output: "lethargic" },
    { input: "sunny", output: "gloomy" },
    { input: "windy", output: "calm" },
  ],
  new OpenAIEmbeddings(),
  HNSWLib,
  { k: 1 }
);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: "Give the antonym of every input",
  suffix: "Input: {adjective}\nOutput:",
  inputVariables: ["adjective"],
});

// Input is about the weather, so it should select e.g. the sunny/gloomy example.
console.log(await dynamicPrompt.format({ adjective: "rainy" }));
/*
  Give the antonym of every input

  Input: sunny
  Output: gloomy

  Input: rainy
  Output:
*/

// Input is a measurement, so it should select the tall/short example.
console.log(await dynamicPrompt.format({ adjective: "large" }));
/*
  Give the antonym of every input

  Input: tall
  Output: short

  Input: large
  Output:
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
By default, each field in the examples object is concatenated, embedded, and stored in the vectorstore for later similarity search against user queries.
If you only want to embed specific keys (e.g., you only want to search for examples whose query is similar to the one the user provides), you can pass an `inputKeys` array in the final `options` parameter, as sketched below.
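For instance, a minimal sketch based on the antonym examples above (the exact `{ k, inputKeys }` options object here is an assumption drawn from the description above, not code from the original page):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

// Only the "input" field of each example is embedded for similarity search;
// the "output" field is still stored and used when formatting the prompt.
const inputOnlySelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
  ],
  new OpenAIEmbeddings(),
  HNSWLib,
  { k: 1, inputKeys: ["input"] }
);
```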
Loading from an existing vectorstore
------------------------------------
You can also use a pre-initialized vector store by passing an instance to the `SemanticSimilarityExampleSelector` constructor directly, as shown below. You can also add more examples via the `addExample` method:
```typescript
// Ephemeral, in-memory vector store for demo purposes
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const embeddings = new OpenAIEmbeddings();
const memoryVectorStore = new MemoryVectorStore(embeddings);

const examples = [
  {
    query: "healthy food",
    output: `galbi`,
  },
  {
    query: "healthy food",
    output: `schnitzel`,
  },
  {
    query: "foo",
    output: `bar`,
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore: memoryVectorStore,
  k: 2,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question: {query}",
  inputVariables: ["query"],
});

const formattedValue = await dynamicPrompt.format({
  query: "What is a healthy food?",
});
console.log(formattedValue);
/*
Answer the user's question, using the below examples as reference:

<example>
  <user_input>
    healthy
  </user_input>
  <output>
    galbi
  </output>
</example>

<example>
  <user_input>
    healthy
  </user_input>
  <output>
    schnitzel
  </output>
</example>

User question: What is a healthy food?
*/

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({ query: "What is a healthy food?" });
console.log(result);
/*
  AIMessage {
    content: 'A healthy food can be galbi or schnitzel.',
    additional_kwargs: { function_call: undefined }
  }
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
Metadata filtering
------------------
When adding examples, each field is available as metadata in the produced document. If you would like further control over your search space, you can add extra fields to your examples and pass a `filter` parameter when initializing your selector:
```typescript
// Ephemeral, in-memory vector store for demo purposes
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { Document } from "@langchain/core/documents";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const embeddings = new OpenAIEmbeddings();
const memoryVectorStore = new MemoryVectorStore(embeddings);

const examples = [
  {
    query: "healthy food",
    output: `lettuce`,
    food_type: "vegetable",
  },
  {
    query: "healthy food",
    output: `schnitzel`,
    food_type: "veal",
  },
  {
    query: "foo",
    output: `bar`,
    food_type: "baz",
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore: memoryVectorStore,
  k: 2,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
  // Filter type will depend on your specific vector store.
  // See the section of the docs for the specific vector store you are using.
  filter: (doc: Document) => doc.metadata.food_type === "vegetable",
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question:\n{query}",
  inputVariables: ["query"],
});

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({
  query: "What is exactly one type of healthy food?",
});
console.log(result);
/*
  AIMessage {
    content: 'One type of healthy food is lettuce.',
    additional_kwargs: { function_call: undefined }
  }
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
Custom vectorstore retrievers
-----------------------------
You can also pass a vectorstore retriever instead of a vectorstore. This can be useful if you want to use a retrieval strategy other than pure similarity search, such as maximal marginal relevance:
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
// Requires a vectorstore that supports maximal marginal relevance search
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const pineconeVectorstore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

const pineconeMmrRetriever = pineconeVectorstore.asRetriever({
  searchType: "mmr",
  k: 2,
});

const examples = [
  {
    query: "healthy food",
    output: `lettuce`,
    food_type: "vegetable",
  },
  {
    query: "healthy food",
    output: `schnitzel`,
    food_type: "veal",
  },
  {
    query: "foo",
    output: `bar`,
    food_type: "baz",
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStoreRetriever: pineconeMmrRetriever,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question:\n{query}",
  inputVariables: ["query"],
});

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({
  query: "What is exactly one type of healthy food?",
});
console.log(result);
/*
  AIMessage {
    content: 'lettuce.',
    additional_kwargs: { function_call: undefined }
  }
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
Next steps
----------
You've now learned a bit about using similarity in an example selector.
Next, check out this guide on how to use a [length-based example selector](/v0.2/docs/how_to/example_selectors_length_based).
How to load PDF files
=====================
> [Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This covers how to load `PDF` documents into the Document format that we use downstream.
By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`.
Setup
-----
```bash
npm install pdf-parse
# or
yarn add pdf-parse
# or
pnpm add pdf-parse
```
Usage, one document per page
----------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

// Or, in web environments:
// import { WebPDFLoader } from "langchain/document_loaders/web/pdf";
// const blob = new Blob(); // e.g. from a file input
// const loader = new WebPDFLoader(blob);

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");

const docs = await loader.load();
```
Usage, one document per file
----------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});

const docs = await loader.load();
```
Usage, custom `pdfjs` build
---------------------------
By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object.
In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build.
```bash
npm install pdfjs-dist
# or
yarn add pdfjs-dist
# or
pnpm add pdfjs-dist
```
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
```
Eliminating extra spaces
------------------------
PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this:
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  parsedItemSeparator: "",
});

const docs = await loader.load();
```
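These options can also be combined. For example, a minimal sketch (assuming the same example file as above) that loads the whole file as a single document with no separator between parsed text items:

```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

// Combining the options shown above: one document for the entire file,
// and no extra separator inserted between parsed text items.
const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
  parsedItemSeparator: "",
});

const docs = await loader.load();
```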
How to do extraction without using function calling
===================================================
Prerequisites
This guide assumes familiarity with the following:
* [Extraction](/v0.2/docs/tutorials/extraction)
LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format without using function calling.
This approach relies on designing good prompts and then parsing the LLM's output so that it extracts information well, though it lacks some of the guarantees provided by function calling or JSON mode.
Here, we'll use Claude, which is great at following instructions! See [here for more about Anthropic models](/v0.2/docs/integrations/chat/anthropic).
First, we’ll install the integration package:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm i @langchain/anthropic zod zod-to-json-schema
# or
yarn add @langchain/anthropic zod zod-to-json-schema
# or
pnpm add @langchain/anthropic zod zod-to-json-schema
```
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
tip
All the same considerations for extraction quality apply to the parsing approach.
This tutorial is meant to be simple, but you should generally include reference examples to squeeze out performance!
Using StructuredOutputParser
----------------------------
The following example uses the built-in [`StructuredOutputParser`](/v0.2/docs/how_to/output_parser_structured/) to parse the output of a chat model. We use the built-in prompt formatting instructions contained in the parser.
```typescript
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const parser = StructuredOutputParser.fromZodSchema(personSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
  ],
  ["human", "{query}"],
]);

const partialedPrompt = await prompt.partial({
  format_instructions: parser.getFormatInstructions(),
});
```
Let’s take a look at what information is sent to the model:
```typescript
const query = "Anna is 23 years old and she is 6 feet tall";
```

```typescript
const promptValue = await partialedPrompt.invoke({ query });

console.log(promptValue.toChatMessages());
```
```
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Answer the user query. Wrap the output in `json` tags\n" +
        "You must format your output as a JSON value th"... 1444 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Answer the user query. Wrap the output in `json` tags\n" +
      "You must format your output as a JSON value th"... 1444 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Anna is 23 years old and she is 6 feet tall",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Anna is 23 years old and she is 6 feet tall",
    name: undefined,
    additional_kwargs: {}
  }
]
```
```typescript
const chain = partialedPrompt.pipe(model).pipe(parser);

await chain.invoke({ query });
```
{ name: "Anna", hair_color: "", height_in_meters: "1.83" }
Custom Parsing
--------------
You can also create a custom prompt and parser with `LangChain` and `LCEL`.
You can use a raw function to parse the output from the model.
In the below example, we’ll pass the schema into the prompt as JSON schema. For convenience, we’ll declare our schema with Zod, then use the [`zod-to-json-schema`](https://github.com/StefanTerdell/zod-to-json-schema) utility to convert it to JSON schema.
````typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const peopleSchema = z.object({
  people: z.array(personSchema),
});

const SYSTEM_PROMPT_TEMPLATE = [
  "Answer the user's query. You must return your answer as JSON that matches the given schema:",
  "```json\n{schema}\n```.",
  "Make sure to wrap the answer in ```json and ``` tags. Conform to the given schema exactly.",
].join("\n");

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  ["human", "{query}"],
]);

const extractJsonFromOutput = (message) => {
  const text = message.content;

  // Define the regular expression pattern to match JSON blocks
  const pattern = /```json\s*((.|\n)*?)\s*```/gs;

  // Find all non-overlapping matches of the pattern in the string
  const matches = pattern.exec(text);

  if (matches && matches[1]) {
    try {
      return JSON.parse(matches[1].trim());
    } catch (error) {
      throw new Error(`Failed to parse: ${matches[1]}`);
    }
  } else {
    throw new Error(`No JSON found in: ${message}`);
  }
};
````
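As a quick standalone check of the parser function, you could call it directly on a message-like object. This snippet is an illustrative addition, not part of the original guide; `AIMessage` is only used here to construct a test input:

````typescript
import { AIMessage } from "@langchain/core/messages";

// Hypothetical test input wrapping JSON in ```json tags, as the prompt requests.
const testMessage = new AIMessage(
  'Here is the result:\n```json\n{ "people": [{ "name": "Anna" }] }\n```'
);

console.log(extractJsonFromOutput(testMessage));
// { people: [ { name: "Anna" } ] }
````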
```typescript
const query = "Anna is 23 years old and she is 6 feet tall";

const promptValue = await prompt.invoke({
  schema: zodToJsonSchema(peopleSchema),
  query,
});

promptValue.toString();
```
"System: Answer the user's query. You must return your answer as JSON that matches the given schema:\n"... 170 more characters
```typescript
const chain = prompt.pipe(model).pipe(extractJsonFromOutput);

await chain.invoke({
  schema: zodToJsonSchema(peopleSchema),
  query,
});
```
{ name: "Anna", age: 23, height: { feet: 6, inches: 0 } }
Next steps
----------
You’ve now learned how to perform extraction without using tool calling.
Next, check out some of the other guides in this section, such as [some tips on how to improve extraction quality with examples](/v0.2/docs/how_to/extraction_examples).
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to handle long text
](/v0.2/docs/how_to/extraction_long_text)[
Next
Fallbacks
](/v0.2/docs/how_to/fallbacks)
* [Using StructuredOutputParser](#using-structuredoutputparser)
* [Custom Parsing](#custom-parsing)
* [Next steps](#next-steps)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/extraction_long_text | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to handle long text
=======================
Prerequisites
This guide assumes familiarity with the following:
* [Extraction](/v0.2/docs/tutorials/extraction)
When working with files, like PDFs, you’re likely to encounter text that exceeds your language model’s context window. To process this text, consider these strategies:
1. **Change LLM** Choose a different LLM that supports a larger context window (a brief sketch follows below).
2. **Brute Force** Chunk the document, and extract content from each chunk.
3. **RAG** Chunk the document, index the chunks, and only extract content from a subset of chunks that look “relevant”.
Keep in mind that these strategies have different trade-offs, and the best strategy likely depends on the application that you’re designing!
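For the first strategy, swapping in a model with a larger context window can be a one-line change. A minimal sketch, assuming OpenAI models (the model names here are illustrative; check your provider's current offerings):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// A model with a comparatively small context window...
const smallContextModel = new ChatOpenAI({ model: "gpt-3.5-turbo" });

// ...versus one with a much larger window (also used later in this guide),
// which may let you pass the whole document in a single call.
const largeContextModel = new ChatOpenAI({ model: "gpt-4-0125-preview" });
```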
Set up
------
First, let’s install some required dependencies:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i @langchain/openai zod cheerio

# yarn
yarn add @langchain/openai zod cheerio

# pnpm
pnpm add @langchain/openai zod cheerio
```
Next, we need some example data! Let’s download an article about [cars from Wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain `Document`.
```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
// Only required in a Deno notebook environment to load the peer dep.
import "cheerio";

const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/Car");

const docs = await loader.load();

docs[0].pageContent.length;
```

```
97336
```
Define the schema
-----------------
Here, we’ll define a schema to extract key developments from the text.
```typescript
import { z } from "zod";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const keyDevelopmentSchema = z
  .object({
    year: z
      .number()
      .describe("The year when there was an important historic development."),
    description: z
      .string()
      .describe("What happened in this year? What was the development?"),
    evidence: z
      .string()
      .describe(
        "Repeat verbatim the sentence(s) from which the year and description information were extracted"
      ),
  })
  .describe("Information about a development in the history of cars.");

const extractionDataSchema = z
  .object({
    key_developments: z.array(keyDevelopmentSchema),
  })
  .describe(
    "Extracted information about key developments in the history of cars"
  );

const SYSTEM_PROMPT_TEMPLATE = [
  "You are an expert at identifying key historic development in text.",
  "Only extract important historic developments. Extract nothing if no important information can be found in the text.",
].join("\n");

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // Keep on reading through this use case to see how to use examples to improve performance
  // MessagesPlaceholder('examples'),
  ["human", "{text}"],
]);

// We will be using tool calling mode, which
// requires a tool calling capable model.
const llm = new ChatOpenAI({
  model: "gpt-4-0125-preview",
  temperature: 0,
});

const extractionChain = prompt.pipe(
  llm.withStructuredOutput(extractionDataSchema)
);
```
Brute force approach
--------------------
Split the documents into chunks such that each chunk fits into the context window of the LLM.
```typescript
import { TokenTextSplitter } from "langchain/text_splitter";

const textSplitter = new TokenTextSplitter({
  chunkSize: 2000,
  chunkOverlap: 20,
});

// Note that this method takes an array of docs
const splitDocs = await textSplitter.splitDocuments(docs);
```
Use the `.batch` method present on all runnables to run the extraction in **parallel** across each chunk!
tip
You can often use `.batch()` to parallelize the extractions!
If your model is exposed via an API, this will likely speed up your extraction flow.
```typescript
// Limit just to the first 3 chunks
// so the code can be re-run quickly
const firstFewTexts = splitDocs.slice(0, 3).map((doc) => doc.pageContent);

const extractionChainParams = firstFewTexts.map((text) => {
  return { text };
});

const results = await extractionChain.batch(extractionChainParams, {
  maxConcurrency: 5,
});
```
### Merge results
After extracting data from across the chunks, we’ll want to merge the extractions together.
```typescript
const keyDevelopments = results.flatMap((result) => result.key_developments);

keyDevelopments.slice(0, 20);
```

```
[
  { year: 0, description: "", evidence: "" },
  {
    year: 1769,
    description: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle.",
    evidence: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769."
  },
  {
    year: 1808,
    description: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 25 more characters,
    evidence: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 33 more characters
  },
  {
    year: 1886,
    description: "German inventor Carl Benz patented his Benz Patent-Motorwagen, inventing the modern car—a practical,"... 40 more characters,
    evidence: "The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when German"... 56 more characters
  },
  {
    year: 1908,
    description: "The 1908 Model T, an American car manufactured by the Ford Motor Company, became one of the first ca"... 28 more characters,
    evidence: "One of the first cars affordable by the masses was the 1908 Model T, an American car manufactured by"... 24 more characters
  }
]
```
RAG based approach
------------------
Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the most relevant chunks.
caution
It can be difficult to identify which chunks are relevant.
For example, in the `car` article we’re using here, most of the article contains key development information. So by using **RAG**, we’ll likely be throwing out a lot of relevant information.
We suggest experimenting with your use case and determining whether this approach works or not.
Here’s a simple example that relies on the in-memory `MemoryVectorStore` vector store for demo purposes.
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Only load the first 10 docs for speed in this demo use-case
const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs.slice(0, 10),
  new OpenAIEmbeddings()
);

// Only extract from top document
const retriever = vectorstore.asRetriever({ k: 1 });
```
In this case the RAG extractor is only looking at the top document.
```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const ragExtractor = RunnableSequence.from([
  {
    text: retriever.pipe((docs) => docs[0].pageContent),
  },
  extractionChain,
]);
```
```typescript
const results = await ragExtractor.invoke(
  "Key developments associated with cars"
);

results.key_developments;
```

```
[
  {
    year: 2020,
    description: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million km (1."... 33 more characters,
    evidence: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 millionkm (1.2"... 31 more characters
  },
  {
    year: 2030,
    description: "All fossil fuel vehicles will be banned in Amsterdam from 2030.",
    evidence: "all fossil fuel vehicles will be banned in Amsterdam from 2030."
  },
  {
    year: 2020,
    description: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year.",
    evidence: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year."
  }
]
```
Common issues
-------------
Different methods have their own pros and cons related to cost, speed, and accuracy.
Watch out for these issues:
* Chunking content means that the LLM can fail to extract information if the information is spread across multiple chunks.
* Large chunk overlap may cause the same information to be extracted twice, so be prepared to de-duplicate (see the sketch after this list)!
* LLMs can make up data. If you’re looking for a single fact across a large text using a brute force approach, you may end up getting more made-up data.
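As a starting point for de-duplication, here's a minimal sketch that drops empty extractions (like the `{ year: 0 }` entry above) and keeps one entry per year/description pair; the composite key used here is an assumption and may need tuning for your data:

```typescript
// Drop empty extractions, then de-duplicate on a simple composite key.
// The key choice (year + description) is an assumption; adjust for your data.
const seen = new Set<string>();
const deduplicated = keyDevelopments.filter((d) => {
  // Drop empty extractions the model sometimes produces.
  if (d.year === 0 && d.description.length === 0) return false;
  const key = `${d.year}|${d.description}`;
  // Keep only the first occurrence of each key.
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
});
```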
Next steps
----------
You’ve now learned a few strategies for extracting information from text that exceeds your model’s context window.
Next, check out some of the other guides in this section, such as [some tips on how to improve extraction quality with examples](/v0.2/docs/how_to/extraction_examples).
https://js.langchain.com/v0.2/docs/how_to/fallbacks
Fallbacks
=========
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
When working with language models, you may encounter issues from the underlying APIs, e.g. rate limits or downtime. As you move your LLM applications into production it becomes more and more important to have contingencies for errors. That's why we've introduced the concept of fallbacks.
Crucially, fallbacks can be applied not only at the LLM level but at the whole runnable level. This is important because oftentimes different models require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic - you probably want to use e.g. a different prompt template.
Handling LLM API errors
-----------------------
This is perhaps the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit a rate limit, or any number of other things.
**IMPORTANT:** By default, many of LangChain's LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying rather than failing.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/anthropic @langchain/openai

# yarn
yarn add @langchain/anthropic @langchain/openai

# pnpm
pnpm add @langchain/anthropic @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// Use a fake model name that will always throw an error
const fakeOpenAIModel = new ChatOpenAI({
  model: "potato!",
  maxRetries: 0,
});

const anthropicModel = new ChatAnthropic({});

const modelWithFallback = fakeOpenAIModel.withFallbacks({
  fallbacks: [anthropicModel],
});

const result = await modelWithFallback.invoke("What is your name?");

console.log(result);

/*
  AIMessage {
    content: ' My name is Claude. I was created by Anthropic.',
    additional_kwargs: {}
  }
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Fallbacks for RunnableSequences
-------------------------------
We can also create fallbacks for sequences, where the fallbacks are themselves sequences. Here we do that with two different models: ChatOpenAI, and then the non-chat OpenAI model. Because OpenAI is NOT a chat model, you likely want a different prompt.
```typescript
import { ChatOpenAI, OpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate, PromptTemplate } from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages<{ animal: string }>([
  [
    "system",
    "You're a nice assistant who always includes a compliment in your response",
  ],
  ["human", "Why did the {animal} cross the road?"],
]);

// Use a fake model name that will always throw an error
const fakeOpenAIChatModel = new ChatOpenAI({
  model: "potato!",
  maxRetries: 0,
});

const prompt = PromptTemplate.fromTemplate(`Instructions: You should always include a compliment in your response.
Question: Why did the {animal} cross the road?
Answer:`);

const openAILLM = new OpenAI({});

const outputParser = new StringOutputParser();

const badChain = chatPrompt.pipe(fakeOpenAIChatModel).pipe(outputParser);

const goodChain = prompt.pipe(openAILLM).pipe(outputParser);

const chain = badChain.withFallbacks({
  fallbacks: [goodChain],
});

const result = await chain.invoke({
  animal: "dragon",
});

console.log(result);

/*
  I don't know, but I'm sure it was an impressive sight. You must have a great
  imagination to come up with such an interesting question!
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Handling long inputs
--------------------
One of the big limiting factors of LLMs is their context window. Sometimes you can count and track the length of prompts before sending them to an LLM, but in situations where that is hard or complicated, you can fall back to a model with a longer context length.
```typescript
import { ChatOpenAI } from "@langchain/openai";

// Use a model with a shorter context window
const shorterLlm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  maxRetries: 0,
});

const longerLlm = new ChatOpenAI({
  model: "gpt-3.5-turbo-16k",
});

const modelWithFallback = shorterLlm.withFallbacks({
  fallbacks: [longerLlm],
});

const input = `What is the next number: ${"one, two, ".repeat(3000)}`;

try {
  await shorterLlm.invoke(input);
} catch (e) {
  // Length error
  console.log(e);
}

const result = await modelWithFallback.invoke(input);

console.log(result);

/*
  AIMessage {
    content: 'The next number is one.',
    name: undefined,
    additional_kwargs: { function_call: undefined }
  }
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
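When counting prompt length up front is feasible, you can also route between models yourself rather than waiting for an error. A minimal sketch reusing the models above; `getNumTokens` is available on LangChain language models, and the 4,000-token threshold is an assumption you should adjust for your model's actual limit:

```typescript
// Route by token count instead of relying on a thrown length error.
// The threshold below is an assumption; derive it from your model's limit.
const invokeWithRouting = async (input: string) => {
  const tokenCount = await shorterLlm.getNumTokens(input);
  return tokenCount < 4000
    ? shorterLlm.invoke(input)
    : longerLlm.invoke(input);
};

await invokeWithRouting(`What is the next number: ${"one, two, ".repeat(3000)}`);
```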
Fallback to a better model
--------------------------
Oftentimes we ask models to output in a specific format (like JSON). Models like GPT-3.5 can do this okay, but sometimes struggle. This naturally points to fallbacks - we can first try a faster and cheaper model, then fall back to GPT-4 if parsing fails.
```typescript
import { z } from "zod";
import { OpenAI, ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

const prompt = PromptTemplate.fromTemplate(
  `Return a JSON object containing the following value wrapped in an "input" key. Do not return anything else:\n{input}`
);

const badModel = new OpenAI({
  maxRetries: 0,
  model: "gpt-3.5-turbo-instruct",
});

const normalModel = new ChatOpenAI({
  model: "gpt-4",
});

const outputParser = StructuredOutputParser.fromZodSchema(
  z.object({
    input: z.string(),
  })
);

const badChain = prompt.pipe(badModel).pipe(outputParser);

const goodChain = prompt.pipe(normalModel).pipe(outputParser);

try {
  const result = await badChain.invoke({
    input: "testing0",
  });
} catch (e) {
  console.log(e);
  /*
    OutputParserException [Error]: Failed to parse. Text: " {
      "name" : " Testing0 ", "lastname" : " testing ", "fullname" : " testing ", "role" : " test ",
      "telephone" : "+1-555-555-555 ", "email" : " testing@gmail.com ", "role" : " test ",
      "text" : " testing0 is different than testing ", "role" : " test ",
      "immediate_affected_version" : " 0.0.1 ", "immediate_version" : " 1.0.0 ",
      "leading_version" : " 1.0.0 ", "version" : " 1.0.0 ", "finger prick" : " no ",
      "finger prick" : " s ", "text" : " testing0 is different than testing ", "role" : " test ",
      "immediate_affected_version" : " 0.0.1 ", "immediate_version" : " 1.0.0 ",
      "leading_version" : " 1.0.0 ", "version" : " 1.0.0 ", "finger prick" :".
    Error: SyntaxError: Unexpected end of JSON input
  */
}

const chain = badChain.withFallbacks({
  fallbacks: [goodChain],
});

const result = await chain.invoke({
  input: "testing",
});

console.log(result);

/*
  { input: 'testing' }
*/
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StructuredOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) from `@langchain/core/output_parsers`
https://js.langchain.com/v0.2/docs/how_to/functions
How to run custom functions
===========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
You can use arbitrary functions as [Runnables](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html). This is useful for formatting or when you need functionality not provided by other LangChain components, and custom functions used as Runnables are called [`RunnableLambdas`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableLambda.html).
Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single object input and unpacks it into multiple arguments.
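For example, here's a minimal sketch of such a wrapper (the `add` function is hypothetical):

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// A hypothetical function that takes multiple arguments...
const add = (a: number, b: number) => a + b;

// ...wrapped so it accepts a single object input and unpacks it.
const addRunnable = RunnableLambda.from(
  (input: { a: number; b: number }) => add(input.a, input.b)
);

await addRunnable.invoke({ a: 1, b: 2 }); // 3
```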
This guide will cover:
* How to explicitly create a runnable from a custom function using the `RunnableLambda` constructor
* Coercion of custom functions into runnables when used in chains
* How to accept and use run metadata in your custom function
* How to stream with custom functions by having them return generators
Using the constructor
---------------------
Below, we explicitly wrap our custom logic in a runnable using the `RunnableLambda.from` method:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i @langchain/openai

# yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";

const lengthFunction = (input: { foo: string }): { length: string } => {
  return {
    length: input.foo.length.toString(),
  };
};

const model = new ChatOpenAI({ model: "gpt-4o" });

const prompt = ChatPromptTemplate.fromTemplate("What is {length} squared?");

const chain = RunnableLambda.from(lengthFunction)
  .pipe(prompt)
  .pipe(model)
  .pipe(new StringOutputParser());

await chain.invoke({ foo: "bar" });
```

```
"3 squared is \\(3^2\\), which means multiplying 3 by itself. \n" +
  "\n" +
  "\\[3^2 = 3 \\times 3 = 9\\]\n" +
  "\n" +
  "So, 3 squared"... 6 more characters
```
Automatic coercion in chains
----------------------------
When using custom functions in chains with the [`RunnableSequence.from`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html#from) static method, you can omit the explicit `RunnableLambda` creation and rely on coercion.
Here’s a simple example with a function that takes the output from the model and returns the first five letters of it:
```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short story about {topic}"
);

const model = new ChatOpenAI({ model: "gpt-4o" });

const chainWithCoercedFunction = RunnableSequence.from([
  prompt,
  model,
  (input) => input.content.slice(0, 5),
]);

await chainWithCoercedFunction.invoke({ topic: "bears" });
```

```
"Once "
```
Note that we didn’t need to wrap the custom function `(input) => input.content.slice(0, 5)` in a `RunnableLambda`. The custom function is **coerced** into a runnable. See [this section](/v0.2/docs/how_to/sequence/#coercion) for more information.
Passing run metadata
--------------------
Runnable lambdas can optionally accept a [RunnableConfig](https://v02.api.js.langchain.com/interfaces/langchain_core_runnables.RunnableConfig.html) parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs.
```typescript
import { type RunnableConfig } from "@langchain/core/runnables";

const echo = (text: string, config: RunnableConfig) => {
  const prompt = ChatPromptTemplate.fromTemplate(
    "Reverse the following text: {text}"
  );
  const model = new ChatOpenAI({ model: "gpt-4o" });
  const chain = prompt.pipe(model).pipe(new StringOutputParser());
  return chain.invoke({ text }, config);
};

const output = await RunnableLambda.from(echo).invoke("foo", {
  tags: ["my-tag"],
  callbacks: [
    {
      handleLLMEnd: (output) => console.log(output),
    },
  ],
});
```

```
{
  generations: [
    [
      {
        text: "oof",
        message: AIMessage {
          lc_serializable: true,
          lc_kwargs: [Object],
          lc_namespace: [Array],
          content: "oof",
          name: undefined,
          additional_kwargs: [Object],
          response_metadata: [Object],
          tool_calls: [],
          invalid_tool_calls: []
        },
        generationInfo: { finish_reason: "stop" }
      }
    ]
  ],
  llmOutput: {
    tokenUsage: { completionTokens: 2, promptTokens: 13, totalTokens: 15 }
  }
}
```
Streaming
---------
You can use generator functions (i.e. functions that use the `yield` keyword and behave like iterators) in a chain.
The signature of these generators should be `AsyncGenerator<Input> -> AsyncGenerator<Output>`.
These are useful for:
* implementing a custom output parser
* modifying the output of a previous step, while preserving streaming capabilities
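In TypeScript, that signature can be written out explicitly. A minimal sketch (the upper-casing transform is purely illustrative):

```typescript
// A generator transform matching AsyncGenerator<Input> -> AsyncGenerator<Output>.
// It consumes streamed string chunks and yields transformed string chunks.
async function* upperCaseChunks(
  input: AsyncGenerator<string>
): AsyncGenerator<string> {
  for await (const chunk of input) {
    yield chunk.toUpperCase();
  }
}
```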
Here’s an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:
```typescript
const prompt = ChatPromptTemplate.fromTemplate(
  "Write a comma-separated list of 5 animals similar to: {animal}. Do not include numbers"
);

const strChain = prompt.pipe(model).pipe(new StringOutputParser());

const stream = await strChain.stream({ animal: "bear" });

for await (const chunk of stream) {
  console.log(chunk);
}
```

```
Lion, wolf, tiger, cougar, leopard
```
Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list:
```typescript
// This is a custom parser that splits an iterator of llm tokens
// into a list of strings separated by commas
async function* splitIntoList(input) {
  // hold partial input until we get a comma
  let buffer = "";
  for await (const chunk of input) {
    // add current chunk to buffer
    buffer += chunk;
    // while there are commas in the buffer
    while (buffer.includes(",")) {
      // split buffer on comma
      const commaIndex = buffer.indexOf(",");
      // yield everything before the comma
      yield [buffer.slice(0, commaIndex).trim()];
      // save the rest for the next iteration
      buffer = buffer.slice(commaIndex + 1);
    }
  }
  // yield the last chunk
  yield [buffer.trim()];
}

const listChain = strChain.pipe(splitIntoList);

const stream = await listChain.stream({ animal: "bear" });

for await (const chunk of stream) {
  console.log(chunk);
}
```

```
[ "wolf" ]
[ "lion" ]
[ "tiger" ]
[ "cougar" ]
[ "cheetah" ]
```
Invoking it gives a full array of values:
```typescript
await listChain.invoke({ animal: "bear" });
```

```
[ "lion", "tiger", "wolf", "cougar", "jaguar" ]
```
Next steps
----------
Now you’ve learned a few different ways to use custom logic within your chains, and how to implement streaming.
To learn more, see the other how-to guides on runnables in this section.
https://js.langchain.com/v0.2/docs/how_to/few_shot | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
Few Shot Prompt Templates
=========================
Few shot prompting is a technique that provides a Large Language Model (LLM) with a list of examples, then asks the LLM to generate text following the lead of the examples provided.
An example of this is the following:
Say you want your LLM to respond in a specific format. You can few shot prompt the LLM with a list of question-answer pairs so it knows what format to respond in.
Respond to the user's question in the following format:

Question: What is your name?
Answer: My name is John.

Question: What is your age?
Answer: I am 25 years old.

Question: What is your favorite color?
Answer:
Here we left the last `Answer:` undefined so the LLM can fill it in. The LLM will then generate the following:
Answer: I don't have a favorite color; I don't have preferences.
### Use Case[](#use-case "Direct link to Use Case")
In the following example we're few shotting the LLM to rephrase questions into more general queries.
We provide two example pairs, each containing a specific question and its rephrased, more general form. The `FewShotChatMessagePromptTemplate` will use our examples, and when `.format` is called we'll see those examples formatted into messages we can pass to the LLM.
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "langchain/prompts";
const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const examplePrompt = ChatPromptTemplate.fromTemplate(
  `Human: {input}
AI: {output}`
);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt,
  examples,
  inputVariables: [], // no input variables
});
const formattedPrompt = await fewShotPrompt.format({});
console.log(formattedPrompt);
[
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: 'Human: Could the members of The Police perform lawful arrests?\n' +
      'AI: what can the members of The Police do?',
    additional_kwargs: {}
  },
  HumanMessage {
    lc_namespace: [ 'langchain', 'schema' ],
    content: "Human: Jan Sindel's was born in what country?\n" +
      "AI: what is Jan Sindel's personal history?",
    additional_kwargs: {}
  }
]
Then, if we use this with another question, the LLM will rephrase the question the way we want.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({});

const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const examplePrompt = ChatPromptTemplate.fromTemplate(
  `Human: {input}
AI: {output}`
);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  prefix:
    "Rephrase the users query to be more general, using the following examples",
  suffix: "Human: {input}",
  examplePrompt,
  examples,
  inputVariables: ["input"],
});

const formattedPrompt = await fewShotPrompt.format({
  input: "What's France's main city?",
});

const response = await model.invoke(formattedPrompt);
console.log(response);
AIMessage {
  lc_namespace: [ 'langchain', 'schema' ],
  content: 'What is the capital of France?',
  additional_kwargs: { function_call: undefined }
}
### Few Shotting With Functions[](#few-shotting-with-functions "Direct link to Few Shotting With Functions")
You can also partial a prompt with a function. This is useful when you have a variable you always want to fetch the same way; a prime example is the current date or time. Imagine a prompt that should always include the current date: you can't hard-code it in the prompt, and passing it along with the other input variables is tedious. In this case, it's very handy to partial the prompt with a function that always returns the current date.
import { PromptTemplate } from "langchain/prompts";

const getCurrentDate = () => {
  return new Date().toISOString();
};

const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective", "date"],
});

const partialPrompt = await prompt.partial({
  date: getCurrentDate,
});

const formattedPrompt = await partialPrompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);
// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
### Few Shot vs Chat Few Shot[](#few-shot-vs-chat-few-shot "Direct link to Few Shot vs Chat Few Shot")
The chat and non-chat few shot prompt templates behave similarly. The example below demonstrates both, along with the differences in their outputs.
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
  FewShotPromptTemplate,
  PromptTemplate,
} from "langchain/prompts";
const examples = [
  {
    input: "Could the members of The Police perform lawful arrests?",
    output: "what can the members of The Police do?",
  },
  {
    input: "Jan Sindel's was born in what country?",
    output: "what is Jan Sindel's personal history?",
  },
];

const prompt = `Human: {input}
AI: {output}`;

const examplePromptTemplate = PromptTemplate.fromTemplate(prompt);
const exampleChatPromptTemplate = ChatPromptTemplate.fromTemplate(prompt);

const chatFewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt: exampleChatPromptTemplate,
  examples,
  inputVariables: [], // no input variables
});

const fewShotPrompt = new FewShotPromptTemplate({
  examplePrompt: examplePromptTemplate,
  examples,
  inputVariables: [], // no input variables
});
console.log("Chat Few Shot: ", await chatFewShotPrompt.formatMessages({}));/**Chat Few Shot: [ HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: 'Human: Could the members of The Police perform lawful arrests?\n' + 'AI: what can the members of The Police do?', additional_kwargs: {} }, HumanMessage { lc_namespace: [ 'langchain', 'schema' ], content: "Human: Jan Sindel's was born in what country?\n" + "AI: what is Jan Sindel's personal history?", additional_kwargs: {} }] */
console.log("Few Shot: ", await fewShotPrompt.formatPromptValue({}));/**Few Shot:Human: Could the members of The Police perform lawful arrests?AI: what can the members of The Police do?Human: Jan Sindel's was born in what country?AI: what is Jan Sindel's personal history? */
Here we can see the main distinction between `FewShotChatMessagePromptTemplate` and `FewShotPromptTemplate`: their input and output types.

`FewShotChatMessagePromptTemplate` formats its examples with a `ChatPromptTemplate`, and its output is a list of `BaseMessage` instances.

`FewShotPromptTemplate`, on the other hand, formats its examples with a `PromptTemplate`, and its output is a single string.
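To make the distinction concrete, here is a minimal sketch (assuming the `@langchain/openai` package and the two prompt templates defined above) of how each output type is typically consumed:

import { ChatOpenAI, OpenAI } from "@langchain/openai";

// The chat template yields a list of messages, which chat models accept directly.
const chatModel = new ChatOpenAI({});
const messages = await chatFewShotPrompt.formatMessages({});
const chatResponse = await chatModel.invoke(messages);

// The non-chat template yields a single string, suitable for completion-style LLMs.
const llm = new OpenAI({});
const promptString = await fewShotPrompt.format({});
const llmResponse = await llm.invoke(promptString);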
With Non Chat Models[](#with-non-chat-models "Direct link to With Non Chat Models")
------------------------------------------------------------------------------------
LangChain also provides a class for few shot prompt formatting for non-chat models: `FewShotPromptTemplate`. The API is largely the same, but the output is formatted differently (a string rather than chat messages).
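As a rough sketch (reusing the rephrasing examples from earlier; the variable names here are chosen for illustration), a standalone `FewShotPromptTemplate` might look like this:

import { FewShotPromptTemplate, PromptTemplate } from "langchain/prompts";

const examplePrompt = PromptTemplate.fromTemplate(
  `Human: {input}
AI: {output}`
);

const rephrasePrompt = new FewShotPromptTemplate({
  prefix:
    "Rephrase the users query to be more general, using the following examples",
  suffix: "Human: {input}",
  examplePrompt,
  examples: [
    {
      input: "Could the members of The Police perform lawful arrests?",
      output: "what can the members of The Police do?",
    },
  ],
  inputVariables: ["input"],
});

// The result is a plain string rather than a list of chat messages.
const formatted = await rephrasePrompt.format({
  input: "What's France's main city?",
});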
### Partials With Functions[](#partials-with-functions "Direct link to Partials With Functions")
import { FewShotPromptTemplate, PromptTemplate } from "langchain/prompts";
const examplePrompt = PromptTemplate.fromTemplate("{foo}{bar}");const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted);
boobaz\n
### With Functions and Example Selector[](#with-functions-and-example-selector "Direct link to With Functions and Example Selector")
import {
  FewShotPromptTemplate,
  LengthBasedExampleSelector,
  PromptTemplate,
} from "langchain/prompts";
const examplePrompt = PromptTemplate.fromTemplate("An example about {x}");const exampleSelector = await LengthBasedExampleSelector.fromExamples( [{ x: "foo" }, { x: "bar" }], { examplePrompt, maxLength: 200 });const prompt = new FewShotPromptTemplate({ prefix: "{foo}{bar}", exampleSelector, examplePrompt, inputVariables: ["foo", "bar"],});const partialPrompt = await prompt.partial({ foo: () => Promise.resolve("boo"),});const formatted = await partialPrompt.format({ bar: "baz" });console.log(formatted);
boobaz
An example about foo
An example about bar
https://js.langchain.com/v0.2/docs/how_to/graph_constructing
How to construct knowledge graphs
=================================
In this guide we’ll go over the basic ways of constructing a knowledge graph from unstructured text. The constructed graph can then be used as a knowledge base in a RAG application. At a high level, the steps for constructing a knowledge graph from text are:

1. Extracting structured information from text: a model is used to extract structured graph information from the text.
2. Storing into a graph database: persisting the extracted graph information in a graph database enables downstream RAG applications.
Setup[](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i langchain @langchain/community @langchain/openai neo4j-driver zod
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
#### Set environment variables[](#set-environment-variables "Direct link to Set environment variables")
We’ll use OpenAI in this example:
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password"
The below example will create a connection with a Neo4j database.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });
LLM Graph Transformer[](#llm-graph-transformer "Direct link to LLM Graph Transformer")
---------------------------------------------------------------------------------------
Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The LLMGraphTransformer converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The choice of LLM significantly influences the output, determining the accuracy and nuance of the extracted graph data.
import { ChatOpenAI } from "@langchain/openai";import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm";const model = new ChatOpenAI({ temperature: 0, model: "gpt-4-turbo-preview",});const llmGraphTransformer = new LLMGraphTransformer({ llm: model,});
Now we can pass in example text and examine the results.
import { Document } from "@langchain/core/documents";

let text = `Marie Curie, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.
She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.
Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.
She was, in 1906, the first woman to become a professor at the University of Paris.`;

const result = await llmGraphTransformer.convertToGraphDocuments([
  new Document({ pageContent: text }),
]);

console.log(`Nodes: ${result[0].nodes.length}`);
console.log(`Relationships:${result[0].relationships.length}`);
Nodes: 8
Relationships:7
Note that the graph construction process is non-deterministic since we are using an LLM. Therefore, you might get slightly different results on each execution. Examine the following image to better grasp the structure of the generated knowledge graph.
![graph_construction1.png](/v0.2/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png)
Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements.
const llmGraphTransformerFiltered = new LLMGraphTransformer({
  llm: model,
  allowedNodes: ["PERSON", "COUNTRY", "ORGANIZATION"],
  allowedRelationships: ["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],
  strictMode: false,
});

const result_filtered =
  await llmGraphTransformerFiltered.convertToGraphDocuments([
    new Document({ pageContent: text }),
  ]);

console.log(`Nodes: ${result_filtered[0].nodes.length}`);
console.log(`Relationships:${result_filtered[0].relationships.length}`);
Nodes: 6
Relationships:4
For a better understanding of the generated graph, we can again visualize it.
![graph_construction2.png](/v0.2/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png)
Storing to graph database[](#storing-to-graph-database "Direct link to Storing to graph database")
---------------------------------------------------------------------------------------------------
The generated graph documents can be stored in a graph database using the `addGraphDocuments` method.
await graph.addGraphDocuments(result_filtered);
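To sanity-check what was written, you can run a Cypher query against the database. Here is a minimal sketch using the graph's `query` method (the query string itself is illustrative):

// List up to 10 stored nodes with their labels and ids.
const nodes = await graph.query(
  "MATCH (n) RETURN labels(n) AS labels, n.id AS id LIMIT 10"
);
console.log(nodes);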
https://js.langchain.com/v0.2/docs/how_to/extraction_examples
How to use reference examples
=============================
Prerequisites
This guide assumes familiarity with the following:
* [Extraction](/v0.2/docs/tutorials/extraction)
The quality of extraction can often be improved by providing reference examples to the LLM.
tip
While this tutorial focuses on how to use examples with a tool-calling model, the technique is generally applicable and will also work with JSON mode or prompt-based techniques.
We’ll use OpenAI’s GPT-4 this time for its robust support for `ToolMessage`s:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai zod uuid
yarn add @langchain/openai zod uuid
pnpm add @langchain/openai zod uuid
Let’s define a prompt:
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
  new MessagesPlaceholder("examples"),
  // ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
  ["human", "{text}"],
]);
Test out the template:
import { HumanMessage } from "@langchain/core/messages";

const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: [new HumanMessage("testing 1 2 3")],
});

promptValue.toChatMessages();
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You are an expert extraction algorithm.\n" +
        "Only extract relevant information from the text.\n" +
        "If you do n"... 87 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You are an expert extraction algorithm.\n" +
      "Only extract relevant information from the text.\n" +
      "If you do n"... 87 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "testing 1 2 3", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "testing 1 2 3",
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "this is some text", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "this is some text",
    name: undefined,
    additional_kwargs: {}
  }
]
Define the schema[](#define-the-schema "Direct link to Define the schema")
---------------------------------------------------------------------------
Let’s re-use the people schema from the quickstart.
import { z } from "zod";const personSchema = z .object({ name: z.optional(z.string()).describe("The name of the person"), hair_color: z .optional(z.string()) .describe("The color of the person's hair, if known"), height_in_meters: z .optional(z.string()) .describe("Height measured in meters"), }) .describe("Information about a person.");const peopleSchema = z.object({ people: z.array(personSchema),});
Define reference examples[](#define-reference-examples "Direct link to Define reference examples")
---------------------------------------------------------------------------------------------------
Examples can be defined as a list of input-output pairs.
Each example contains an example `input` text and an example `output` showing what should be extracted from the text.
info
The below example is a bit more advanced - the format of the example needs to match the API used (e.g., tool calling or JSON mode etc.).
Here, the formatted examples will match the format expected for the OpenAI tool calling API since that’s what we’re using.
To provide reference examples to the model, we will mock out a fake chat history containing successful usages of the given tool. Because the model can choose to call multiple tools at once (or the same tool multiple times), the example’s outputs are an array:
import {
  AIMessage,
  type BaseMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuid } from "uuid";

type OpenAIToolCall = {
  id: string;
  type: "function";
  function: {
    name: string;
    arguments: string;
  };
};

type Example = {
  input: string;
  toolCallOutputs: Record<string, any>[];
};

/**
 * This function converts an example into a list of messages that can be fed into an LLM.
 *
 * This code serves as an adapter that transforms our example into a list of messages
 * that can be processed by a chat model.
 *
 * The list of messages for each example includes:
 *
 * 1) HumanMessage: This contains the content from which information should be extracted.
 * 2) AIMessage: This contains the information extracted by the model.
 * 3) ToolMessage: This provides confirmation to the model that the tool was requested correctly.
 *
 * The inclusion of ToolMessage is necessary because some chat models are highly optimized for agents,
 * making them less suitable for an extraction use case.
 */
function toolExampleToMessages(example: Example): BaseMessage[] {
  const openAIToolCalls: OpenAIToolCall[] = example.toolCallOutputs.map(
    (output) => {
      return {
        id: uuid(),
        type: "function",
        function: {
          // The name of the function right now corresponds
          // to the passed name.
          name: "extract",
          arguments: JSON.stringify(output),
        },
      };
    }
  );
  const messages: BaseMessage[] = [
    new HumanMessage(example.input),
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openAIToolCalls },
    }),
  ];
  const toolMessages = openAIToolCalls.map((toolCall) => {
    // Return the mocked successful result for a given tool call.
    return new ToolMessage({
      content: "You have correctly called this tool.",
      tool_call_id: toolCall.id,
    });
  });
  return messages.concat(toolMessages);
}
Next let’s define our examples and then convert them into message format.
const examples: Example[] = [
  {
    input:
      "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
    toolCallOutputs: [{}],
  },
  {
    input: "Fiona traveled far from France to Spain.",
    toolCallOutputs: [
      {
        name: "Fiona",
      },
    ],
  },
];

const exampleMessages = [];
for (const example of examples) {
  exampleMessages.push(...toolExampleToMessages(example));
}
The loop produces 6 messages in total (2 examples × 3 messages each).
Let’s test out the prompt:
const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: exampleMessages,
});

promptValue.toChatMessages();
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You are an expert extraction algorithm.\n" +
        "Only extract relevant information from the text.\n" +
        "If you do n"... 87 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You are an expert extraction algorithm.\n" +
      "Only extract relevant information from the text.\n" +
      "If you do n"... 87 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
    name: undefined,
    additional_kwargs: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "",
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You have correctly called this tool.",
      tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You have correctly called this tool.",
    name: undefined,
    additional_kwargs: {},
    tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259"
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Fiona traveled far from France to Spain.",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Fiona traveled far from France to Spain.",
    name: undefined,
    additional_kwargs: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "",
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You have correctly called this tool.",
      tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You have correctly called this tool.",
    name: undefined,
    additional_kwargs: {},
    tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66"
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "this is some text", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "this is some text",
    name: undefined,
    additional_kwargs: {}
  }
]
Create an extractor[](#create-an-extractor "Direct link to Create an extractor")
---------------------------------------------------------------------------------
Here, we’ll create an extractor using **gpt-4**.
import { ChatOpenAI } from "@langchain/openai";// We will be using tool calling mode, which// requires a tool calling capable model.const llm = new ChatOpenAI({ // Consider benchmarking with the best model you can to get // a sense of the best possible quality. model: "gpt-4-0125-preview", temperature: 0,});// For function/tool calling, we can also supply an name for the schema// to give the LLM additional context about what it's extracting.const extractionRunnable = prompt.pipe( llm.withStructuredOutput(peopleSchema, { name: "people" }));
Without examples 😿[](#without-examples "Direct link to Without examples 😿")
------------------------------------------------------------------------------
Notice that even though we’re using `gpt-4`, it’s unreliable with a **very simple** test case!
We run it 5 times below to emphasize this:
const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, examples: [], }); console.log(result);}
{ people: [ { name: "earth", hair_color: "grey", height_in_meters: "1" } ]}{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "1 moon" } ] }{ people: [] }
With examples 😻[](#with-examples "Direct link to With examples 😻")
---------------------------------------------------------------------
Reference examples help fix the failure!
const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, // Example messages from above examples: exampleMessages, }); console.log(result);}
{ people: [] }
{ people: [] }
{ people: [] }
{ people: [] }
{ people: [] }
await extractionRunnable.invoke({
  text: "My name is Hair-ison. My hair is black. I am 3 meters tall.",
  examples: exampleMessages,
});
{ people: [ { name: "Hair-ison", hair_color: "black", height_in_meters: "3" } ]}
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You’ve now learned how to improve extraction quality using few-shot examples.
Next, check out some of the other guides in this section, such as [some tips on how to perform extraction on long text](/v0.2/docs/how_to/extraction_long_text).
https://js.langchain.com/v0.2/docs/how_to/graph_mapping
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to map values to a database
On this page
How to map values to a database
===============================
In this guide we’ll go over strategies to improve graph database query generation by mapping values from user inputs to the database. When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database. We can therefore introduce a new step into the graph database QA system to map values accurately.
Setup
-----
#### Install dependencies
tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i langchain @langchain/community @langchain/openai neo4j-driver zod

# yarn
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod

# pnpm
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
```
#### Set environment variables
We’ll use OpenAI in this example:
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The example below creates a connection to a Neo4j database and populates it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Detecting entities in the user input
------------------------------------
First, we need to extract the types of entities/values that we want to map to the graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

// Schema describing the entities we want the model to extract.
const entitiesSchema = z
  .object({
    names: z
      .array(z.string())
      .describe("All the person or movies appearing in the text"),
  })
  .describe("Identifying information about entities.");

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are extracting person and movies from the text."],
  [
    "human",
    "Use the given format to extract information from the following\ninput: {question}",
  ],
]);

const entityChain = prompt.pipe(llm.withStructuredOutput(entitiesSchema));
```
We can test the entity extraction chain.
```typescript
const entities = await entityChain.invoke({
  question: "Who played in Casino movie?",
});
console.log(entities);
```
{ names: [ "Casino" ] }
We will use a simple `CONTAINS` clause to match entities to the database. In practice, you might want to use fuzzy search or a full-text index to allow for minor misspellings, as sketched below.
```typescript
const matchQuery = `MATCH (p:Person|Movie)
WHERE p.name CONTAINS $value OR p.title CONTAINS $value
RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type
LIMIT 1`;

const matchToDatabase = async (values: { names: string[] }) => {
  let result = "";
  for (const entity of values.names) {
    const response = await graph.query(matchQuery, { value: entity });
    if (response.length > 0) {
      result += `${entity} maps to ${response[0]["result"]} ${response[0]["type"]} in database\n`;
    }
  }
  return result;
};

await matchToDatabase(entities);
```
"Casino maps to Casino Movie in database\n"
Custom Cypher generating chain
------------------------------
We need to define a custom Cypher prompt that takes the entity mapping information, along with the schema and the user question, to construct a Cypher statement. We will use the LangChain expression language to accomplish that.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

// Generate Cypher statement based on natural language input
const cypherTemplate = `Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}
Entities in the question map to the following database values:
{entities_list}
Question: {question}
Cypher query:`;

const cypherPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Given an input question, convert it to a Cypher query. No pre-amble.",
  ],
  ["human", cypherTemplate],
]);

const llmWithStop = llm.bind({ stop: ["\nCypherResult:"] });

const cypherResponse = RunnableSequence.from([
  RunnablePassthrough.assign({ names: entityChain }),
  RunnablePassthrough.assign({
    entities_list: async (x) => matchToDatabase(x.names),
    schema: async (_) => graph.getSchema(),
  }),
  cypherPrompt,
  llmWithStop,
  new StringOutputParser(),
]);
```
```typescript
const cypher = await cypherResponse.invoke({
  question: "Who played in Casino movie?",
});
console.log(cypher);
```
'MATCH (:Movie {title: "Casino"})<-[:ACTED_IN]-(actor)\nRETURN actor.name'
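The chain stops at Cypher generation. As a rough final step (a sketch, not part of the chain above), we can execute the generated statement against the graph and inspect the records:

```typescript
// Sketch: run the generated Cypher against the database.
const records = await graph.query(cypher);
console.log(records);
// Assumed result shape: an array of objects, e.g.
// [ { "actor.name": "Robert De Niro" }, ... ]
```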
How to improve results with prompting
=====================================
In this guide we’ll go over prompting strategies to improve graph database query generation. We’ll largely focus on methods for getting relevant database-specific information in your prompt.
Setup
-----
#### Install dependencies
tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i langchain @langchain/community @langchain/openai neo4j-driver

# yarn
yarn add langchain @langchain/community @langchain/openai neo4j-driver

# pnpm
pnpm add langchain @langchain/community @langchain/openai neo4j-driver
```
#### Set environment variables
We’ll use OpenAI in this example:
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The example below creates a connection to a Neo4j database and populates it with example data about movies and their actors.
```typescript
import "neo4j-driver";
import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";

const url = Deno.env.get("NEO4J_URI");
const username = Deno.env.get("NEO4J_USERNAME");
const password = Deno.env.get("NEO4J_PASSWORD");

const graph = await Neo4jGraph.initialize({ url, username, password });

// Import movie information
const moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'
AS row
MERGE (m:Movie {id:row.movieId})
SET m.released = date(row.released),
    m.title = row.title,
    m.imdbRating = toFloat(row.imdbRating)
FOREACH (director in split(row.director, '|') |
    MERGE (p:Person {name:trim(director)})
    MERGE (p)-[:DIRECTED]->(m))
FOREACH (actor in split(row.actors, '|') |
    MERGE (p:Person {name:trim(actor)})
    MERGE (p)-[:ACTED_IN]->(m))
FOREACH (genre in split(row.genres, '|') |
    MERGE (g:Genre {name:trim(genre)})
    MERGE (m)-[:IN_GENRE]->(g))`;

await graph.query(moviesQuery);
```
Schema refreshed successfully.
[]
Filtering graph schema
----------------------
At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements. Let’s say we are dealing with the following graph schema:
```typescript
await graph.refreshSchema();
console.log(graph.schema);
```
```text
Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}, Chunk {embedding: LIST, id: STRING, text: STRING}
Relationship properties are the following:

The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie)
```
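The `Chunk` node type here is irrelevant to movie questions. One simple way to scope what the model sees (a minimal sketch, not a built-in API) is to hand-curate the schema string and pass it as the `schema` variable when formatting prompts, instead of injecting the full `graph.getSchema()` output:

```typescript
// Sketch: a hand-curated subset of the schema, omitting the unrelated
// Chunk node type. Pass this wherever a {schema} prompt variable is expected.
const filteredSchema = `Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}
The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie)`;
```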
Few-shot examples
-----------------
Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.
Let’s say we have the following examples:
```typescript
const examples = [
  {
    question: "How many artists are there?",
    query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)",
  },
  {
    question: "Which actors played in the movie Casino?",
    query: "MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name",
  },
  {
    question: "How many movies has Tom Hanks acted in?",
    query:
      "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)",
  },
  {
    question: "List all the genres of the movie Schindler's List",
    query:
      "MATCH (m:Movie {{title: 'Schindler\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name",
  },
  {
    question:
      "Which actors have worked in movies from both the comedy and action genres?",
    query:
      "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name",
  },
  {
    question:
      "Which directors have made movies with at least three different actors named 'John'?",
    query:
      "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name",
  },
  {
    question: "Identify movies where directors also played a role in the film.",
    query:
      "MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name",
  },
  {
    question:
      "Find the actor with the highest number of movies in the database.",
    query:
      "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1",
  },
];
```
We can create a few-shot prompt with them like so:
```typescript
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";

const examplePrompt = PromptTemplate.fromTemplate(
  "User input: {question}\nCypher query: {query}"
);

const prompt = new FewShotPromptTemplate({
  examples: examples.slice(0, 5),
  examplePrompt,
  prefix:
    "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.",
  suffix: "User input: {question}\nCypher query: ",
  inputVariables: ["question", "schema"],
});
```
```typescript
console.log(
  await prompt.format({
    question: "How many artists are there?",
    schema: "foo",
  })
);
```
```text
You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.

Here is the schema information
foo.

Below are a number of examples of questions and their corresponding Cypher queries.

User input: How many artists are there?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)

User input: Which actors played in the movie Casino?
Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name

User input: How many movies has Tom Hanks acted in?
Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)

User input: List all the genres of the movie Schindler's List
Cypher query: MATCH (m:Movie {title: 'Schindler\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name

User input: Which actors have worked in movies from both the comedy and action genres?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name

User input: How many artists are there?
Cypher query:
```
Dynamic few-shot examples
-------------------------
If we have enough examples, we may want to include only the most relevant ones in the prompt, either because they don’t fit in the model’s context window or because the long tail of examples distracts the model. Specifically, given any input, we want to include the examples most relevant to that input.
We can do just this using an ExampleSelector. In this case we’ll use a [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), which stores the examples in the vector database of our choosing. At runtime it performs a similarity search between the input and our examples, and returns the most semantically similar ones:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  examples,
  new OpenAIEmbeddings(),
  Neo4jVectorStore,
  {
    k: 5,
    inputKeys: ["question"],
    preDeleteCollection: true,
    url,
    username,
    password,
  }
);
```
```typescript
await exampleSelector.selectExamples({
  question: "how many artists are there?",
});
```
[ { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)", question: "How many artists are there?" }, { query: "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)", question: "How many movies has Tom Hanks acted in?" }, { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE"... 84 more characters, question: "Which actors have worked in movies from both the comedy and action genres?" }, { query: "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH"... 71 more characters, question: "Which directors have made movies with at least three different actors named 'John'?" }, { query: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DES"... 9 more characters, question: "Find the actor with the highest number of movies in the database." }]
To use it, we can pass the ExampleSelector directly into our FewShotPromptTemplate:
```typescript
const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  prefix:
    "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.",
  suffix: "User input: {question}\nCypher query: ",
  inputVariables: ["question", "schema"],
});
```
```typescript
console.log(
  await prompt.format({
    question: "how many artists are there?",
    schema: "foo",
  })
);
```
```text
You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.

Here is the schema information
foo.

Below are a number of examples of questions and their corresponding Cypher queries.

User input: How many artists are there?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)

User input: How many movies has Tom Hanks acted in?
Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)

User input: Which actors have worked in movies from both the comedy and action genres?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name

User input: Which directors have made movies with at least three different actors named 'John'?
Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name

User input: Find the actor with the highest number of movies in the database.
Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1

User input: how many artists are there?
Cypher query:
```
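We can now plug the dynamic few-shot prompt into a `GraphCypherQAChain` and query the graph: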
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const chain = GraphCypherQAChain.fromLLM({
  graph,
  llm,
  cypherPrompt: prompt,
});
```
```typescript
await chain.invoke({
  query: "How many actors are in the graph?",
});
```
{ result: "There are 967 actors in the graph." }
How to add a semantic layer over the database
=============================================
You can use database queries to retrieve information from a graph database like Neo4j. One option is to use LLMs to generate Cypher statements. While that option provides excellent flexibility, it can be brittle and may not consistently generate precise Cypher statements. Instead of generating Cypher statements, we can implement Cypher templates as tools in a semantic layer that an LLM agent can interact with.
![graph_semantic.png](/v0.2/assets/images/graph_semantic-365248d76b7862193c33f44eaa6ecaeb.png)
Setup
-----
#### Install dependencies
tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i langchain @langchain/community @langchain/openai neo4j-driver zod

# yarn
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod

# pnpm
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
```
#### Set environment variables
We’ll use OpenAI in this example:
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The example below creates a connection to a Neo4j database and populates it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Custom tools with Cypher templates
----------------------------------
A semantic layer consists of various tools exposed to an LLM that it can use to interact with a knowledge graph. The tools can vary in complexity, but you can think of each tool in a semantic layer as a function.
The function we will implement is to retrieve information about movies or their cast.
```typescript
const descriptionQuery = `MATCH (m:Movie|Person)
WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate
MATCH (m)-[r:ACTED_IN|IN_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name)
       + "\nyear: "+coalesce(m.released,"") +"\n"
       + reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1`;

const getInformation = async (entity: string) => {
  try {
    const data = await graph.query(descriptionQuery, { candidate: entity });
    return data[0]["context"];
  } catch (error) {
    return "No information was found";
  }
};
```
Notice that we have defined the Cypher statement used to retrieve information ourselves. Therefore, we can avoid generating Cypher statements and use the LLM agent only to populate the input parameters. To give the LLM agent the additional information it needs about when to use the tool and its input parameters, we wrap the function as a tool.
```typescript
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const informationInput = z.object({
  entity: z.string().describe("movie or a person mentioned in the question"),
});

class InformationTool extends StructuredTool {
  schema = informationInput;

  name = "Information";

  description =
    "useful for when you need to answer questions about various actors or movies";

  async _call(input: z.infer<typeof informationInput>): Promise<string> {
    return getInformation(input.entity);
  }
}
```
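Before handing the tool to an agent, we can sanity-check it directly. A quick sketch (tools are runnables in recent `@langchain/core` versions, so `.invoke` should work; the entity value is illustrative):

```typescript
// Sketch: call the tool directly with a structured input.
const informationTool = new InformationTool();
console.log(await informationTool.invoke({ entity: "Casino" }));
```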
OpenAI Agent
------------
LangChain expression language makes it very convenient to define an agent to interact with a graph database over the semantic layer.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor } from "langchain/agents";
import { formatToOpenAIFunctionMessages } from "langchain/agents/format_scratchpad";
import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser";
import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { RunnableSequence } from "@langchain/core/runnables";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const tools = [new InformationTool()];

const llmWithTools = llm.bind({
  functions: tools.map(convertToOpenAIFunction),
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that finds information about movies and recommends them. If tools require follow up questions, make sure to ask the user for clarification. Make sure to include any available options that need to be clarified in the follow up questions. Do only the things the user specifically requested.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

// Convert [human, ai] string pairs into message objects.
const _formatChatHistory = (chatHistory: Array<[string, string]>) => {
  const buffer: Array<BaseMessage> = [];
  for (const [human, ai] of chatHistory) {
    buffer.push(new HumanMessage({ content: human }));
    buffer.push(new AIMessage({ content: ai }));
  }
  return buffer;
};

const agent = RunnableSequence.from([
  {
    input: (x) => x.input,
    chat_history: (x) => {
      if ("chat_history" in x) {
        return _formatChatHistory(x.chat_history);
      }
      return [];
    },
    agent_scratchpad: (x) => {
      if ("steps" in x) {
        return formatToOpenAIFunctionMessages(x.steps);
      }
      return [];
    },
  },
  prompt,
  llmWithTools,
  new OpenAIFunctionsAgentOutputParser(),
]);

const agentExecutor = new AgentExecutor({ agent, tools });
```
```typescript
await agentExecutor.invoke({ input: "Who played in Casino?" });
```
{ input: "Who played in Casino?", output: 'The movie "Casino" starred James Woods, Joe Pesci, Robert De Niro, and Sharon Stone.'}
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to improve results with prompting
](/v0.2/docs/how_to/graph_prompting)[
Next
How to reindex data to keep your vectorstore in-sync with the underlying data source
](/v0.2/docs/how_to/indexing)
* [Setup](#setup)
* [Custom tools with Cypher templates](#custom-tools-with-cypher-templates)
* [OpenAI Agent](#openai-agent)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/debugging | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to debug your LLM apps
==========================
As with any kind of software, you'll eventually need to debug your LLM applications. A model call may fail, model output may be misformatted, or a chain of nested model calls may make it unclear where along the way an incorrect output was produced.
Here are a few different tools and functionalities to aid in debugging.
Tracing
-------
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
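If exporting shell variables is inconvenient in your setup (for example, in some notebook or serverless environments), you can set the same variables from code instead. A minimal sketch, assuming a Node.js runtime where `process.env` is writable and this runs before any traced calls:

```typescript
// Enable LangSmith tracing programmatically. This must run before any
// LangChain components are invoked so the tracer picks up the values.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "..."; // your LangSmith API key
```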
Suppose we have an agent and want to visualize the actions it takes and the tool outputs it receives. Without any debugging, here's what we see:
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [new TavilySearchResults(), new Calculator()];

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

const result = await agentExecutor.invoke({
  input:
    "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
});

console.log(result);
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
```
{
  input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?',
  output: 'So Christopher Nolan, the director of the 2023 film Oppenheimer, is 53 years old, which is approximately 19,345 days old (assuming 365 days per year).'
}
```
We don't get much output, but since we set up LangSmith we can easily see what happened under the hood:
[https://smith.langchain.com/public/fd3a4aa1-dfea-4d17-9d44-a306e7b230d3/r](https://smith.langchain.com/public/fd3a4aa1-dfea-4d17-9d44-a306e7b230d3/r)
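Once traces are flowing, it helps to label individual runs so they're easier to find in the LangSmith UI. The sketch below continues from the agent example above and passes standard `RunnableConfig` fields as the second argument to `invoke`; the run name, tags, and metadata values are hypothetical labels chosen for illustration:

```typescript
// Continuing from the example above (agentExecutor already constructed).
// runName, tags, and metadata all appear on the resulting LangSmith trace.
const labeledResult = await agentExecutor.invoke(
  {
    input:
      "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
  },
  {
    runName: "oppenheimer-agent-debugging", // hypothetical run label
    tags: ["debugging", "docs-example"], // hypothetical tags
    metadata: { guide: "how_to/debugging" }, // hypothetical metadata
  }
);

console.log(labeledResult);
```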
`verbose`
---------
If you're prototyping in Jupyter Notebooks or running Node scripts, it can be helpful to print out the intermediate steps of a chain run.
There are a number of ways to enable printing at varying degrees of verbosity.
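Under the hood, this kind of printing is driven by LangChain's callbacks system, so you can also attach a console tracer explicitly at call time instead of flipping verbosity flags. A minimal sketch, assuming the `ConsoleCallbackHandler` exported from `@langchain/core/tracers/console`:

```typescript
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

// Callbacks passed at invoke time apply only to this run (including any
// nested calls), leaving the component's configuration unchanged.
const response = await llm.invoke("Why is the sky blue?", {
  callbacks: [new ConsoleCallbackHandler()],
});

console.log(response);
```

Passing handlers per invocation keeps logging out of your components' configuration while still making it available on demand.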
### `{ verbose: true }`
Setting the `verbose` parameter will cause any LangChain component with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.
```typescript
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [
  new TavilySearchResults({ verbose: true }),
  new Calculator({ verbose: true }),
];

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
  verbose: true,
});

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
});

const result = await agentExecutor.invoke({
  input:
    "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
});

console.log(result);
```
#### API Reference:
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
Console output
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?"}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] [0ms] Exiting Chain run with output: { "output": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] [1ms] Exiting Chain run with output: { "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] [1ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [], "agent_scratchpad": []}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [], "agent_scratchpad": []}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] [0ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] ]}[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] [1.98s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[llm/end] [1:llm:ChatAnthropic] [1.98s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": 
"{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] [0ms] Exiting Chain run with output: { "output": [ { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] [1.98s] Exiting Chain run with output: { "output": [ { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[agent/action] [1:chain:AgentExecutor] Agent selected action: { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": 
"Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ]}[tool/start] [1:chain:AgentExecutor > 9:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:chain:AgentExecutor > 9:tool:TavilySearchResults] [2.20s] Exiting Tool run with output: "[{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.96643,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.92804,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92404,"raw_content":null},{"title":"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \"I Try to ...","url":"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/","content":"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\nRELATED:\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\nCONNECT FacebookTwitterInstagram\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.","score":0.92002,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.91831,"raw_content":null}]"[tool/end] [1:tool:TavilySearchResults] [2.20s] Exiting Tool run with output: "[{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.96643,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.92804,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92404,"raw_content":null},{"title":"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \"I Try to ...","url":"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/","content":"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\nRELATED:\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\nCONNECT FacebookTwitterInstagram\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.","score":0.92002,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.91831,"raw_content":null}]"[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" } ]}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap > 13:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap > 13:chain:RunnableLambda] [1ms] Exiting Chain run with output: { "output": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, 
"output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the 
things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. 
To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}
[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign > 12:chain:RunnableMap] [2ms] Exiting Chain run with output: { "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023.
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}
[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 11:chain:RunnableAssign] [3ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age?
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - 
IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}
[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 14:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age?
What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - 
IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ]}
[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 14:prompt:ChatPromptTemplate] [2ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] }}
[llm/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 15:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] ]}
[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\n Subscribe\nEverything Zoomer\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } } ] ]}
[llm/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 15:llm:ChatAnthropic] [3.50s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}
[llm/end] [1:llm:ChatAnthropic] [3.50s] Exiting LLM run with output: { "generations": [ [ { "text": "", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}
[chain/start] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 16:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 16:parser:ToolCallingAgentOutputParser] [1ms] Exiting Chain run with output: { "output": [ { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[chain/end] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent] [3.51s] Exiting Chain run with output: { "output": [ { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] } ]}[agent/action] [1:chain:AgentExecutor] Agent selected action: { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ]}[tool/start] [1:chain:AgentExecutor > 17:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/start] [1:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/end] [1:chain:AgentExecutor > 17:tool:Calculator] [3ms] Exiting Tool run with output: "18980"[tool/end] [1:tool:Calculator] [3ms] Exiting Tool run with output: "18980"[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap] Entering Chain run with input: { "input": ""}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap > 21:chain:RunnableLambda] Entering Chain run with input: { "input": ""}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap > 21:chain:RunnableLambda] [1ms] Exiting Chain run with output: { "output": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign > 20:chain:RunnableMap] [2ms] Exiting Chain run with output: { "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 19:chain:RunnableAssign] [4ms] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 22:prompt:ChatPromptTemplate] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "steps": [ { "action": { "tool": "tavily_search_results_json", "toolInput": { "input": "Oppenheimer 2023 film director age" }, "toolCallId": "toolu_01NUVejujVo2y8WGVtZ49KAN", "log": "Invoking \"tavily_search_results_json\" with {\"input\":\"Oppenheimer 2023 film director age\"}\n[{\"type\":\"tool_use\",\"id\":\"toolu_01NUVejujVo2y8WGVtZ49KAN\",\"name\":\"tavily_search_results_json\",\"input\":{\"input\":\"Oppenheimer 2023 film director age\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]" }, { "action": { "tool": "calculator", "toolInput": { "input": "52 * 365" }, "toolCallId": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "log": "Invoking \"calculator\" with {\"input\":\"52 * 365\"}\n[{\"type\":\"text\",\"text\":\"Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. Some key information about Christopher Nolan:\\n\\n- He is a British-American film director, producer and screenwriter.\\n- He was born on July 30, 1970, making him currently 52 years old.\\n\\nTo calculate his age in days:\"},{\"type\":\"tool_use\",\"id\":\"toolu_01NVTbm5aNYSm1wGYb6XF7jE\",\"name\":\"calculator\",\"input\":{\"input\":\"52 * 365\"}}]", "messageLog": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } } ] }, "observation": "18980" } ], "agent_scratchpad": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. 
Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ]}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 22:prompt:ChatPromptTemplate] [2ms] Exiting Chain run with output: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "prompt_values", "ChatPromptValue" ], "kwargs": { "messages": [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] }}[llm/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 23:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] ]}[llm/start] [1:llm:ChatAnthropic] Entering LLM run with input: { "messages": [ [ { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "SystemMessage" ], "kwargs": { "content": "You are a helpful assistant", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "HumanMessage" ], "kwargs": { "content": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "additional_kwargs": {}, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "tool_use", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "name": "tavily_search_results_json", "input": { "input": "Oppenheimer 2023 film director age" } } ], "additional_kwargs": { "id": "msg_015MqAHr84dBCAqBgjou41Km", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 409, "output_tokens": 68 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "tavily_search_results_json", "args": "{\"input\":\"Oppenheimer 2023 film director age\"}", "id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "index": 0 } ], "tool_calls": [ { "name": "tavily_search_results_json", "args": { "input": "Oppenheimer 2023 film director age" }, "id": "toolu_01NUVejujVo2y8WGVtZ49KAN" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NUVejujVo2y8WGVtZ49KAN", "content": "[{\"title\":\"Oppenheimer (2023) - IMDb\",\"url\":\"https://www.imdb.com/title/tt15398776/\",\"content\":\"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.\",\"score\":0.96643,\"raw_content\":null},{\"title\":\"Christopher Nolan's Oppenheimer - Rotten Tomatoes\",\"url\":\"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/\",\"content\":\"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. 
Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.\",\"score\":0.92804,\"raw_content\":null},{\"title\":\"Oppenheimer (film) - Wikipedia\",\"url\":\"https://en.wikipedia.org/wiki/Oppenheimer_(film)\",\"content\":\"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\\nCritical response\\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \\\"more objective view of his story from a different character's point of view\\\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \\\"big-atures\\\", since the special effects team had tried to build the models as physically large as possible. 
He felt that \\\"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\\\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \\\"emotional\\\" and resembling that of a thriller, while also remarking that Nolan had \\\"Trojan-Horsed a biopic into a thriller\\\".[72]\\nCasting\\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\\\", while also underscoring that it is a \\\"huge shift in perception about the reality of Oppenheimer's perception\\\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \\\"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\\\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.\",\"score\":0.92404,\"raw_content\":null},{\"title\":\"'Oppenheimer' Director Christopher Nolan On Filmmaking at 53: \\\"I Try to ...\",\"url\":\"https://www.everythingzoomer.com/arts-entertainment/2023/11/21/oppenheimer-director-christopher-nolan-on-filmmaking-at-53-i-try-to-challenge-myself-with-every-film/\",\"content\":\"Oppenheimer will be available to own on 4K Ultra HD, Blu-ray and DVD — including more than three hours of bonus features — on November 21.\\nRELATED:\\nVisiting the Trinity Site Featured in ‘Oppenheimer’ Is a Sobering Reminder of the Horror of Nuclear Weapons\\nBarbenheimer: How ‘Barbie’ and ‘Oppenheimer’ Became the Unlikely Movie Marriage of the Summer\\nBlast From the Past: ‘Asteroid City’ & ‘Oppenheimer’ and the Age of Nuclear Anxiety\\nEXPLORE HealthMoneyTravelFoodStyleBook ClubClassifieds#ZoomerDailyPolicy & PerspectiveArts & EntertainmentStars & RoyaltySex & Love\\nCONNECT FacebookTwitterInstagram\\nSUBSCRIBE Terms of Subscription ServiceE-NewslettersSubscribe to Zoomer Magazine\\nBROWSE AboutMastheadContact UsAdvertise with UsPrivacy Policy\\nEverythingZoomer.com is part of the ZoomerMedia Digital Network “I think with experience — and with the experience of watching your films with an audience over the years — you do more and more recognize the human elements that people respond to, and the things that move you and the things that move the audience.”\\n “What’s interesting, as you watch the films over time, is that some of his preoccupations are the same, but then some of them have changed over time with who he is as a person and what’s going on in his own life,” Thomas said.\\n The British-American director’s latest explosive drama, Oppenheimer, which has earned upwards of US$940 million at the global box office, follows theoretical physicist J. 
Robert Oppenheimer (played by Cillian Murphy) as he leads the team creating the first atomic bomb, as director of the Manhattan Project’s Los Alamos Laboratory.\\n Subscribe\\nEverything Zoomer\\n‘Oppenheimer’ Director Christopher Nolan On Filmmaking at 53: “I Try to Challenge Myself with Every Film”\\nDirector Christopher Nolan poses upon his arrival for the premiere of the movie 'Oppenheimer' in Paris on July 11, 2023.\",\"score\":0.92002,\"raw_content\":null},{\"title\":\"'Oppenheimer' Review: A Man for Our Time - The New York Times\",\"url\":\"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html\",\"content\":\"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\\n\",\"score\":0.91831,\"raw_content\":null}]", "additional_kwargs": { "name": "tavily_search_results_json" }, "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": [ { "type": "text", "text": "Based on the search results, the 2023 film Oppenheimer was directed by Christopher Nolan. 
Some key information about Christopher Nolan:\n\n- He is a British-American film director, producer and screenwriter.\n- He was born on July 30, 1970, making him currently 52 years old.\n\nTo calculate his age in days:" }, { "type": "tool_use", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "name": "calculator", "input": { "input": "52 * 365" } } ], "additional_kwargs": { "id": "msg_01RBDqmJKNXiEjgt5Xrng4mz", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2810, "output_tokens": 137 }, "stop_reason": "tool_use" }, "tool_call_chunks": [ { "name": "calculator", "args": "{\"input\":\"52 * 365\"}", "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "index": 0 } ], "tool_calls": [ { "name": "calculator", "args": { "input": "52 * 365" }, "id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE" } ], "invalid_tool_calls": [], "response_metadata": {} } }, { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "ToolMessage" ], "kwargs": { "tool_call_id": "toolu_01NVTbm5aNYSm1wGYb6XF7jE", "content": "18980", "additional_kwargs": { "name": "calculator" }, "response_metadata": {} } } ] ]}[llm/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 23:llm:ChatAnthropic] [2.16s] Exiting LLM run with output: { "generations": [ [ { "text": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[llm/end] [1:llm:ChatAnthropic] [2.16s] Exiting LLM run with output: { "generations": [ [ { "text": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} } } } ] ]}[chain/start] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 24:parser:ToolCallingAgentOutputParser] Entering Chain run with input: { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessageChunk" ], "kwargs": { "content": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year).", "additional_kwargs": { "id": "msg_01TYp6vJRKJQgXXRoqVrDGTR", "type": "message", "role": "assistant", "model": 
"claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 2960, "output_tokens": 51 }, "stop_reason": "end_turn" }, "tool_call_chunks": [], "tool_calls": [], "invalid_tool_calls": [], "response_metadata": {} }}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent > 24:parser:ToolCallingAgentOutputParser] [2ms] Exiting Chain run with output: { "returnValues": { "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)." }, "log": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."}[chain/end] [1:chain:AgentExecutor > 18:chain:ToolCallingAgent] [2.20s] Exiting Chain run with output: { "returnValues": { "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)." }, "log": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."}[chain/end] [1:chain:AgentExecutor] [9.92s] Exiting Chain run with output: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "output": "So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is approximately 18,980 days old (assuming 365 days per year)."}
### `Tool({ ..., verbose: true })`
You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callback calls made specifically by that object).
```typescript
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [
  new TavilySearchResults({ verbose: true }),
  new Calculator({ verbose: true }),
];

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
  verbose: false,
});

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: false,
});

const result = await agentExecutor.invoke({
  input:
    "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?",
});

console.log(result);
```
#### API Reference:
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [TavilySearchResults](https://v02.api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [Calculator](https://v02.api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
Console output
[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [1.95s] Exiting Tool run with output: "[{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.97519,"raw_content":null},{"title":"Oppenheimer's Grandson Reacts to New Christopher Nolan Film | TIME","url":"https://time.com/6297743/oppenheimer-grandson-movie-interview/","content":"July 25, 2023 3:32 PM EDT. M oviegoers turned out in droves this weekend for writer-director Christopher Nolan's new film Oppenheimer, fueling an expectations-shattering domestic box office debut ...","score":0.95166,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.95127,"raw_content":null},{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. 
[for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.92204,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.92179,"raw_content":null}]"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Christopher Nolan age"[tool/end] [1:tool:TavilySearchResults] [1.15s] Exiting Tool run with output: "[{"title":"Christopher Nolan - IMDb","url":"https://www.imdb.com/name/nm0634240/","content":"Christopher Nolan is a British-American writer-director-producer of acclaimed films such as Inception, The Dark Knight, and Interstellar. He was born on July 30, 1970, in London, England.","score":0.96627,"raw_content":null},{"title":"Christopher Nolan: Biography, Movie Director, Filmmaker","url":"https://www.biography.com/movies-tv/christopher-nolan","content":"To meet the team, visit our About Us page: https://www.biography.com/about/a43602329/about-us\nFilmmakers\nMatt Damon\nGreta Gerwig\nMartin Scorsese\nBradley Cooper\nJodie Foster\nDodi Fayed\nDrew Barrymore\nRyan Gosling Was Reluctant to Play Barbie’s Ken\nThe Actors in the Most Wes Anderson Movies\n“The Idol” Raises Eyesbrows at Cannes\n41 Inspiring Famous Women in History\nBen Affleck and Matt Damon’s Lifelong Friendship\nA Part of Hearst Digital Media\nWe may earn commission from links on this page, but we only recommend products we back.\n The Dark Knight and Inception\nIn July 2008, Nolan’s Batman sequel, The Dark Knight, opened and set the record as having the highest weekend gross in the United States, at $158 million; Knight went on to become one of the top five highest-grossing films in America. In the fall of 2014, Nolan returned to the big screen with Interstellar, a nearly three-hour sci-fi epic that follows the journey of a team of astronauts seeking a new world for the inhabitants of a besieged Earth. The director's career then traveled into the stratosphere, when he agreed to helm the re-launch of the comic book hero Batman with the 2005 film Batman Begins, starring Christian Bale as the titular character. 
Built around three storylines offering different perspectives on a dramatic turn of events in 1940, Dunkirk earned mostly rave reviews for its portrayals of the tensions and terrors of war, picking up Golden Globe nominations for Best Motion Picture—Drama and Best Director, as well as an Academy Award nod for Best Director.\n","score":0.95669,"raw_content":null},{"title":"Christopher Nolan - Biography - IMDb","url":"https://www.imdb.com/name/nm0634240/bio/","content":"Learn about the life and career of acclaimed writer-director Christopher Nolan, who was born on July 30, 1970, in London, England. Find out his filmography, awards, family, trivia and more on IMDb.","score":0.91217,"raw_content":null},{"title":"Christopher Nolan - Wikipedia","url":"https://en.wikipedia.org/wiki/Christopher_Nolan","content":"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\".[68]\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \"Inception became a classic almost as soon as it was projected on silver screens\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \"a heterogeneity of conditions of products\" extending from low-budget films to lucrative blockbusters, \"a wide range of genres and settings\" and \"a diversity of styles that trumpet his versatility\".[193]\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \"experimental impulses\" with the demands of mainstream entertainment, describing his oeuvre as \"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \"kept a viable alternate model of big-budget filmmaking alive\", in an era where blockbuster filmmaking has become \"a largely computer-generated art form\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]","score":0.90288,"raw_content":null},{"title":"Christopher Nolan | Biography, Movies, Batman, Oppenheimer, & Facts ...","url":"https://www.britannica.com/biography/Christopher-Nolan-British-director","content":"The sci-fi drama depicted the efforts of a group of scientists to relocate humanity from an Earth vitiated by war and famine to another planet by way of a wormhole. The film turns on this character’s attempt to move past the boundaries of the technology in order to actually plant an idea in a dreamer’s head. His 2023 film Oppenheimer, depicts J. Robert Oppenheimer’s role in the development of the atomic bomb and the later security hearing over his alleged ties to communism. It used a destabilizing reverse-order story line to mirror the fractured mental state of its protagonist, a man with short-term amnesia who is trying to track down the person who murdered his wife. 
The Dark Knight (2008) leaned even more heavily on the moral and structural decay of its setting, fictional Gotham City, and it revived such classic Batman villains as the Joker (played by Heath Ledger).","score":0.90219,"raw_content":null}]"[tool/start] [1:tool:Calculator] Entering Tool run with input: "(2023 - 1970) * 365"[tool/end] [1:tool:Calculator] [3ms] Exiting Tool run with output: "19345"{ input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?', output: 'So Christopher Nolan, the director of the 2023 film Oppenheimer, is currently 52 years old, which is 19,345 days old (assuming 365 days per year).'}MacBook-Pro-4:examples jacoblee$ yarn start examples/src/guides/debugging/simple_agent_verbose_some.ts(node:78812) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("file%3A///Users/jacoblee/langchain/langchainjs/node_modules/tsx/dist/loader.js", pathToFileURL("./"));'(Use `node --trace-warnings ...` to show where the warning was created)[WARN]: You have enabled LangSmith tracing without backgrounding callbacks.[WARN]: If you are not using a serverless environment where you must wait for tracing calls to finish,[WARN]: we suggest setting "process.env.LANGCHAIN_CALLBACKS_BACKGROUND=true" to avoid additional latency.[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [1.76s] Exiting Tool run with output: "[{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.97075,"raw_content":null},{"title":"Christopher Nolan's Oppenheimer - Rotten Tomatoes","url":"https://editorial.rottentomatoes.com/article/everything-we-know-about-christopher-nolans-oppenheimer/","content":"Billboards and movie theater pop-ups across Los Angeles have been ticking down for months now: Christopher Nolan's epic account of J. Robert Oppenheimer, the father of the atomic bomb, is nearing an explosive release on July 21, 2023. Nolan movies are always incredibly secretive, twists locked alongside totems behind safe doors, actors not spilling an ounce of Earl Grey tea.","score":0.9684,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.94834,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. 
The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.92995,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.92512,"raw_content":null}]"[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Christopher Nolan age"[tool/end] [1:tool:TavilySearchResults] [1.69s] Exiting Tool run with output: "[{"title":"Christopher Nolan: Biography, Movie Director, Filmmaker","url":"https://www.biography.com/movies-tv/christopher-nolan","content":"To meet the team, visit our About Us page: https://www.biography.com/about/a43602329/about-us\nFilmmakers\nMatt Damon\nGreta Gerwig\nMartin Scorsese\nBradley Cooper\nJodie Foster\nDodi Fayed\nDrew Barrymore\nRyan Gosling Was Reluctant to Play Barbie’s Ken\nThe Actors in the Most Wes Anderson Movies\n“The Idol” Raises Eyesbrows at Cannes\n41 Inspiring Famous Women in History\nBen Affleck and Matt Damon’s Lifelong Friendship\nA Part of Hearst Digital Media\nWe may earn commission from links on this page, but we only recommend products we back.\n The Dark Knight and Inception\nIn July 2008, Nolan’s Batman sequel, The Dark Knight, opened and set the record as having the highest weekend gross in the United States, at $158 million; Knight went on to become one of the top five highest-grossing films in America. In the fall of 2014, Nolan returned to the big screen with Interstellar, a nearly three-hour sci-fi epic that follows the journey of a team of astronauts seeking a new world for the inhabitants of a besieged Earth. 
The director's career then traveled into the stratosphere, when he agreed to helm the re-launch of the comic book hero Batman with the 2005 film Batman Begins, starring Christian Bale as the titular character. Built around three storylines offering different perspectives on a dramatic turn of events in 1940, Dunkirk earned mostly rave reviews for its portrayals of the tensions and terrors of war, picking up Golden Globe nominations for Best Motion Picture—Drama and Best Director, as well as an Academy Award nod for Best Director.\n","score":0.96408,"raw_content":null},{"title":"Christopher Nolan - Biography - IMDb","url":"https://www.imdb.com/name/nm0634240/bio/","content":"Learn about the life and career of acclaimed writer-director Christopher Nolan, who was born on July 30, 1970, in London, England. Find out his filmography, awards, family, trivia and more on IMDb.","score":0.95409,"raw_content":null},{"title":"Christopher Nolan - IMDb","url":"https://www.imdb.com/name/nm0634240/","content":"Christopher Nolan is a British-American writer-director-producer of acclaimed films such as Inception, The Dark Knight, and Interstellar. He was born on July 30, 1970, in London, England.","score":0.95401,"raw_content":null},{"title":"Christopher Nolan - Wikipedia","url":"https://en.wikipedia.org/wiki/Christopher_Nolan","content":"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\".[68]\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \"Inception became a classic almost as soon as it was projected on silver screens\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \"a heterogeneity of conditions of products\" extending from low-budget films to lucrative blockbusters, \"a wide range of genres and settings\" and \"a diversity of styles that trumpet his versatility\".[193]\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \"experimental impulses\" with the demands of mainstream entertainment, describing his oeuvre as \"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \"kept a viable alternate model of big-budget filmmaking alive\", in an era where blockbuster filmmaking has become \"a largely computer-generated art form\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]","score":0.93205,"raw_content":null},{"title":"Christopher Nolan | Biography, Movies, Batman, Oppenheimer, & Facts ...","url":"https://www.britannica.com/biography/Christopher-Nolan-British-director","content":"The sci-fi drama depicted the efforts of a group of scientists to relocate humanity from an Earth vitiated by war and famine to another planet by way of a wormhole. The film turns on this character’s attempt to move past the boundaries of the technology in order to actually plant an idea in a dreamer’s head. His 2023 film Oppenheimer, depicts J. Robert Oppenheimer’s role in the development of the atomic bomb and the later security hearing over his alleged ties to communism. It used a destabilizing reverse-order story line to mirror the fractured mental state of its protagonist, a man with short-term amnesia who is trying to track down the person who murdered his wife. 
The Dark Knight (2008) leaned even more heavily on the moral and structural decay of its setting, fictional Gotham City, and it revived such classic Batman villains as the Joker (played by Heath Ledger).","score":0.90859,"raw_content":null}]" [tool/start] [1:tool:Calculator] Entering Tool run with input: "52 * 365"[tool/end] [1:tool:Calculator] [2ms] Exiting Tool run with output: "18980"{ input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?', output: '<result>\nTherefore, Christopher Nolan is 18,980 days old.\n</result>'}MacBook-Pro-4:examples jacoblee$ yarn start examples/src/guides/debugging/simple_agent_verbose_some.ts(node:78844) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`:--import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("file%3A///Users/jacoblee/langchain/langchainjs/node_modules/tsx/dist/loader.js", pathToFileURL("./"));'(Use `node --trace-warnings ...` to show where the warning was created)[WARN]: You have enabled LangSmith tracing without backgrounding callbacks.[WARN]: If you are not using a serverless environment where you must wait for tracing calls to finish,[WARN]: we suggest setting "process.env.LANGCHAIN_CALLBACKS_BACKGROUND=true" to avoid additional latency.[tool/start] [1:tool:TavilySearchResults] Entering Tool run with input: "Oppenheimer 2023 film director age"[tool/end] [1:tool:TavilySearchResults] [2.63s] Exiting Tool run with output: "[{"title":"Oppenheimer (film) - Wikipedia","url":"https://en.wikipedia.org/wiki/Oppenheimer_(film)","content":"The film continued to hold well in the following weeks, making $32 million and $29.1 million in its fifth and sixth weekends.[174][175] As of September 10, 2023, the highest grossing territories were the United Kingdom ($72 million), Germany ($46.9 million), China ($46.8 million), France ($40.1 million) and Australia ($25.9 million).[176]\nCritical response\nThe film received critical acclaim.[a] Critics praised Oppenheimer primarily for its screenplay, the performances of the cast (particularly Murphy and Downey), and the visuals;[b] it was frequently cited as one of Nolan's best films,[191][192][183] and of 2023, although some criticism was aimed towards the writing of the female characters.[187] Hindustan Times reported that the film was also hailed as one of the best films of the 21st century.[193] He also chose to alternate between scenes in color and black-and-white to convey the story from both subjective and objective perspectives, respectively,[68] with most of Oppenheimer's view shown via the former, while the latter depicts a \"more objective view of his story from a different character's point of view\".[69][67] Wanting to make the film as subjective as possible, the production team decided to include visions of Oppenheimer's conceptions of the quantum world and waves of energy.[70] Nolan noted that Oppenheimer never publicly apologized for his role in the atomic bombings of Hiroshima and Nagasaki, but still desired to portray Oppenheimer as feeling genuine guilt for his actions, believing this to be accurate.[71]\nI think of any character I've dealt with, Oppenheimer is by far the most ambiguous and paradoxical. 
The production team was able to obtain government permission to film at White Sands Missile Range, but only at highly inconvenient hours, and therefore chose to film the scene elsewhere in the New Mexico desert.[2][95]\nThe production filmed the Trinity test scenes in Belen, New Mexico, with Murphy climbing a 100-foot steel tower, a replica of the original site used in the Manhattan Project, in rough weather.[2][95]\nA special set was built in which gasoline, propane, aluminum powder, and magnesium were used to create the explosive effect.[54] Although they used miniatures for the practical effect, the film's special effects supervisor Scott R. Fisher referred to them as \"big-atures\", since the special effects team had tried to build the models as physically large as possible. He felt that \"while our relationship with that [nuclear] fear has ebbed and flowed with time, the threat itself never actually went away\", and felt the 2022 Russian invasion of Ukraine had caused a resurgence of nuclear anxiety.[54] Nolan had also penned a script for a biopic of Howard Hughes approximately during the time of production of Martin Scorsese's The Aviator (2004), which had given him insight on how to write a script regarding a person's life.[53] Emily Blunt described the Oppenheimer script as \"emotional\" and resembling that of a thriller, while also remarking that Nolan had \"Trojan-Horsed a biopic into a thriller\".[72]\nCasting\nOppenheimer marks the sixth collaboration between Nolan and Murphy, and the first starring Murphy as the lead. [for Oppenheimer] in his approach to trying to deal with the consequences of what he'd been involved with\", while also underscoring that it is a \"huge shift in perception about the reality of Oppenheimer's perception\".[53] He wanted to execute a quick tonal shift after the atomic bombings of Hiroshima and Nagasaki, desiring to go from the \"highest triumphalism, the highest high, to the lowest low in the shortest amount of screen time possible\".[66] For the ending, Nolan chose to make it intentionally vague to be open to interpretation and refrained from being didactic or conveying specific messages in his work.","score":0.95617,"raw_content":null},{"title":"Oppenheimer (2023) - IMDb","url":"https://www.imdb.com/title/tt15398776/","content":"Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. 
Robert Oppenheimer and his role in the development of the atomic bomb.","score":0.95378,"raw_content":null},{"title":"'Oppenheimer' Review: A Man for Our Time - The New York Times","url":"https://www.nytimes.com/2023/07/19/movies/oppenheimer-review-christopher-nolan.html","content":"Instead, it is here that the film’s complexities and all its many fragments finally converge as Nolan puts the finishing touches on his portrait of a man who contributed to an age of transformational scientific discovery, who personified the intersection of science and politics, including in his role as a Communist boogeyman, who was transformed by his role in the creation of weapons of mass destruction and soon after raised the alarm about the dangers of nuclear war.\n He served as director of a clandestine weapons lab built in a near-desolate stretch of Los Alamos, in New Mexico, where he and many other of the era’s most dazzling scientific minds puzzled through how to harness nuclear reactions for the weapons that killed tens of thousands instantly, ending the war in the Pacific.\n Nolan integrates these black-and-white sections with the color ones, using scenes from the hearing and the confirmation — Strauss’s role in the hearing and his relationship with Oppenheimer directly affected the confirmation’s outcome — to create a dialectical synthesis. To signal his conceit, he stamps the film with the words “fission” (a splitting into parts) and “fusion” (a merging of elements); Nolan being Nolan, he further complicates the film by recurrently kinking up the overarching chronology — it is a lot.\n It’s also at Berkeley that Oppenheimer meets the project’s military head, Leslie Groves (a predictably good Damon), who makes him Los Alamos’s director, despite the leftist causes he supported — among them, the fight against fascism during the Spanish Civil War — and some of his associations, including with Communist Party members like his brother, Frank (Dylan Arnold).\n","score":0.92271,"raw_content":null},{"title":"Oppenheimer (2023) - Full Cast & Crew - IMDb","url":"https://www.imdb.com/title/tt15398776/fullcredits/","content":"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. Movies. Release Calendar Top 250 Movies Most Popular Movies Browse Movies by Genre Top Box Office Showtimes & Tickets Movie News India Movie Spotlight. ... Peter Oppenheimer - Age 8 (uncredited) Adam Walker Federman ... MIT Student ...","score":0.91904,"raw_content":null},{"title":"Oppenheimer's Grandson Reacts to New Christopher Nolan Film | TIME","url":"https://time.com/6297743/oppenheimer-grandson-movie-interview/","content":"July 25, 2023 3:32 PM EDT. M oviegoers turned out in droves this weekend for writer-director Christopher Nolan's new film Oppenheimer, fueling an expectations-shattering domestic box office debut ...","score":0.91248,"raw_content":null}]"[tool/start] [1:tool:Calculator] Entering Tool run with input: "(2023 - 1970) * 365"[tool/end] [1:tool:Calculator] [2ms] Exiting Tool run with output: "19345"
{ input: 'Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?', output: "So as of 2023, Christopher Nolan's age is approximately 19,345 days.\n" + '\n' + 'In summary:\n' + '- The 2023 film Oppenheimer was directed by Christopher Nolan\n' + '- Nolan was born on July 30, 1970, making his current age around 53 years old\n' + '- Converted to days, Nolan is approximately 19,345 days old as of 2023'}
Other callbacks
---------------------------------------------------------------------
`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [`ConsoleCallbackHandler`](https://v02.api.js.langchain.com/classes/langchain_core_tracers_console.ConsoleCallbackHandler.html). You can also implement your own callbacks to execute custom functionality.
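For example, here is a minimal sketch of passing a hand-rolled callback handler at invocation time. The handler is an inline object implementing only the events it cares about; the model name is illustrative.

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

const result = await model.invoke("What is 1 + 1?", {
  callbacks: [
    {
      // Fires when the chat model is invoked, with the input messages.
      handleChatModelStart: async (llm, messages) => {
        console.log("Chat model start:", JSON.stringify(messages));
      },
      // Fires when the model finishes, with the full generation output.
      handleLLMEnd: async (output) => {
        console.log("LLM end:", JSON.stringify(output, null, 2));
      },
    },
  ],
});
```

Because the handler is scoped to this one `invoke` call, it only observes runs triggered by that invocation, which is often all you need while debugging.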
https://js.langchain.com/v0.2/docs/how_to/indexing
How to reindex data to keep your vectorstore in-sync with the underlying data source
====================================================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag/)
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
Here, we will look at a basic indexing workflow using the LangChain indexing API.
The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps:
* Avoid writing duplicated content into the vector store
* Avoid re-writing unchanged content
* Avoid re-computing embeddings over unchanged content
All of which should save you time and money, as well as improve your vector search results.
Crucially, the indexing API will work even with documents that have gone through several transformation steps (e.g., via text chunking) with respect to the original source documents.
How it works
------------------------------------------------------------
LangChain indexing makes use of a record manager (`RecordManager`) that keeps track of document writes into the vector store.
When indexing content, hashes are computed for each document, and the following information is stored in the record manager:
* the document hash (hash of both page content and metadata)
* write time
* the source ID - each document should include information in its metadata to allow us to determine the ultimate source of this document
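To make this concrete, you can picture each entry the record manager keeps as a small row keyed by the document hash. The shape below is purely illustrative; the field names are ours, not the actual schema used by `PostgresRecordManager` or any other implementation:

```typescript
// Illustrative only: not the real RecordManager schema.
interface UpsertionRecord {
  key: string; // hash of the document's page content and metadata
  updatedAt: number; // write time, used to decide which records are stale
  groupId: string | null; // source ID, e.g. doc.metadata.source
}
```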
Deletion Modes
--------------
When indexing documents into a vector store, it's possible that some existing documents in the vector store should be deleted. In certain situations you may want to remove any existing documents that are derived from the same sources as the new documents being indexed. In others you may want to delete all existing documents wholesale. The indexing API deletion modes let you pick the behavior you want:
| Cleanup Mode | De-Duplicates Content | Parallelizable | Cleans Up Deleted Source Docs | Cleans Up Mutations of Source Docs and/or Derived Docs | Clean Up Timing |
| --- | --- | --- | --- | --- | --- |
| None | ✅ | ✅ | ❌ | ❌ | - |
| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously |
| Full | ✅ | ❌ | ✅ | ✅ | At end of indexing |
`None` does not do any automatic clean up, allowing the user to manually do clean up of old content.
`incremental` and `full` offer the following automated clean up:
* If the content of the source document or derived documents has changed, both the `incremental` and `full` modes will clean up (delete) previous versions of the content.
* If the source document has been deleted (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not.
When content is mutated (e.g., the source PDF file was revised), there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content is written, but before the old version is deleted.
* `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes.
* `full` mode does the clean up after all batches have been written.
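In code, the mode is just the `cleanup` option passed to the `index` function, with `undefined` meaning no automatic cleanup. A minimal sketch, assuming `docs` is an array of documents and the other arguments are set up as in the quickstart below:

```typescript
// Choosing a cleanup mode when indexing.
await index({
  docsSource: docs,
  recordManager,
  vectorStore,
  options: {
    cleanup: "incremental", // or "full", or undefined for no cleanup
    sourceIdKey: "source", // metadata key that identifies a document's source
  },
});
```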
Requirements
------------
1. Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously.
2. Only works with LangChain `vectorstore`s that support: a) document addition by ID (an `addDocuments` method with an `ids` argument), and b) deletion by ID (a `delete` method with an `ids` argument). See the sketch below.
Compatible Vectorstores: [`PGVector`](/v0.2/docs/integrations/vectorstores/pgvector), [`Chroma`](/v0.2/docs/integrations/vectorstores/chroma), [`CloudflareVectorize`](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize), [`ElasticVectorSearch`](/v0.2/docs/integrations/vectorstores/elasticsearch), [`FAISS`](/v0.2/docs/integrations/vectorstores/faiss), [`MomentoVectorIndex`](/v0.2/docs/integrations/vectorstores/momento_vector_index), [`Pinecone`](/v0.2/docs/integrations/vectorstores/pinecone), [`SupabaseVectorStore`](/v0.2/docs/integrations/vectorstores/supabase), [`VercelPostgresVectorStore`](/v0.2/docs/integrations/vectorstores/vercel_postgres), [`Weaviate`](/v0.2/docs/integrations/vectorstores/weaviate), [`Xata`](/v0.2/docs/integrations/vectorstores/xata)
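In interface terms, requirement 2 amounts to the two calls below. The method names follow the LangChain.js `VectorStore` interface, though the exact shape of the options argument varies by integration:

```typescript
// a) Document addition by ID: write documents under caller-provided IDs.
await vectorStore.addDocuments(docs, { ids: ["id-1", "id-2"] });

// b) Deletion by ID: remove previously written documents.
await vectorStore.delete({ ids: ["id-1"] });
```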
Caution
-------
The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes).
If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content.
This is unlikely to be an issue in actual settings for the following reasons:
1. The `RecordManager` uses higher resolution timestamps.
2. The data would need to change between the first and second task runs, which becomes unlikely if the time interval between the tasks is small.
3. Indexing tasks typically take more than a few ms.
Quickstart
----------
```typescript
import { PostgresRecordManager } from "@langchain/community/indexes/postgres";
import { index } from "langchain/indexes";
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";
import { PoolConfig } from "pg";
import { OpenAIEmbeddings } from "@langchain/openai";
import { CharacterTextSplitter } from "@langchain/textsplitters";
import { BaseDocumentLoader } from "langchain/document_loaders/base";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvector
const config = {
  postgresConnectionOptions: {
    type: "postgres",
    host: "127.0.0.1",
    port: 5432,
    user: "myuser",
    password: "ChangeMe",
    database: "api",
  } as PoolConfig,
  tableName: "testlangchain",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
};

const vectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);

// Create a new record manager
const recordManagerConfig = {
  postgresConnectionOptions: {
    type: "postgres",
    host: "127.0.0.1",
    port: 5432,
    user: "myuser",
    password: "ChangeMe",
    database: "api",
  } as PoolConfig,
  tableName: "upsertion_records",
};
const recordManager = new PostgresRecordManager(
  "test_namespace",
  recordManagerConfig
);

// Create the schema if it doesn't exist
await recordManager.createSchema();

// Index some documents
const doc1 = {
  pageContent: "kitty",
  metadata: { source: "kitty.txt" },
};
const doc2 = {
  pageContent: "doggy",
  metadata: { source: "doggy.txt" },
};

/**
 * Hacky helper method to clear content. See the `full` mode section to understand why it works.
 */
async function clear() {
  await index({
    docsSource: [],
    recordManager,
    vectorStore,
    options: {
      cleanup: "full",
      sourceIdKey: "source",
    },
  });
}

// No cleanup
await clear();
// This mode does not do automatic clean up of old versions of content;
// however, it still takes care of content de-duplication.
console.log(
  await index({
    docsSource: [doc1, doc1, doc1, doc1, doc1, doc1],
    recordManager,
    vectorStore,
    options: {
      cleanup: undefined,
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 1,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

await clear();
console.log(
  await index({
    docsSource: [doc1, doc2],
    recordManager,
    vectorStore,
    options: {
      cleanup: undefined,
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 2,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

// Second time around all content will be skipped
console.log(
  await index({
    docsSource: [doc1, doc2],
    recordManager,
    vectorStore,
    options: {
      cleanup: undefined,
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 0,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 2,
  }
*/

// Updated content will be added, but old won't be deleted
const doc1Updated = {
  pageContent: "kitty updated",
  metadata: { source: "kitty.txt" },
};
console.log(
  await index({
    docsSource: [doc1Updated, doc2],
    recordManager,
    vectorStore,
    options: {
      cleanup: undefined,
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 1,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 1,
  }
*/

/*
  Resulting records in the database:
  [
    { pageContent: "kitty", metadata: { source: "kitty.txt" } },
    { pageContent: "doggy", metadata: { source: "doggy.txt" } },
    { pageContent: "kitty updated", metadata: { source: "kitty.txt" } },
  ]
*/

// Incremental mode
await clear();
console.log(
  await index({
    docsSource: [doc1, doc2],
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 2,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

// Indexing again should result in both documents getting skipped,
// also skipping the embedding operation!
console.log(
  await index({
    docsSource: [doc1, doc2],
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 0,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 2,
  }
*/

// If we provide no documents with incremental indexing mode, nothing will change.
console.log(
  await index({
    docsSource: [],
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 0,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

// If we mutate a document, the new version will be written and all old versions
// sharing the same source will be deleted.
// This only affects the documents with the same source id!
const changedDoc1 = {
  pageContent: "kitty updated",
  metadata: { source: "kitty.txt" },
};
console.log(
  await index({
    docsSource: [changedDoc1],
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 1,
    numUpdated: 0,
    numDeleted: 1,
    numSkipped: 0,
  }
*/

// Full mode
await clear();
// In full mode the user should pass the full universe of content that should be
// indexed into the indexing function.
// Any documents that are not passed into the indexing function and are present
// in the vectorStore will be deleted!
// This behavior is useful to handle deletions of source documents.
const allDocs = [doc1, doc2];
console.log(
  await index({
    docsSource: allDocs,
    recordManager,
    vectorStore,
    options: {
      cleanup: "full",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 2,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

// Say someone deleted the first doc:
const doc2Only = [doc2];
// Using full mode will clean up the deleted content as well.
// This affects all documents regardless of source id!
console.log(
  await index({
    docsSource: doc2Only,
    recordManager,
    vectorStore,
    options: {
      cleanup: "full",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 0,
    numUpdated: 0,
    numDeleted: 1,
    numSkipped: 1,
  }
*/

await clear();
const newDoc1 = {
  pageContent: "kitty kitty kitty kitty kitty",
  metadata: { source: "kitty.txt" },
};
const newDoc2 = {
  pageContent: "doggy doggy the doggy",
  metadata: { source: "doggy.txt" },
};
const splitter = new CharacterTextSplitter({
  separator: "t",
  keepSeparator: true,
  chunkSize: 12,
  chunkOverlap: 2,
});
const newDocs = await splitter.splitDocuments([newDoc1, newDoc2]);
console.log(newDocs);
/*
  [
    { pageContent: "kitty kit", metadata: { source: "kitty.txt" } },
    { pageContent: "tty kitty ki", metadata: { source: "kitty.txt" } },
    { pageContent: "tty kitty", metadata: { source: "kitty.txt" } },
    { pageContent: "doggy doggy", metadata: { source: "doggy.txt" } },
    { pageContent: "the doggy", metadata: { source: "doggy.txt" } },
  ]
*/
console.log(
  await index({
    docsSource: newDocs,
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 5,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

const changedDoggyDocs = [
  {
    pageContent: "woof woof",
    metadata: { source: "doggy.txt" },
  },
  {
    pageContent: "woof woof woof",
    metadata: { source: "doggy.txt" },
  },
];
console.log(
  await index({
    docsSource: changedDoggyDocs,
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 2,
    numUpdated: 0,
    numDeleted: 2,
    numSkipped: 0,
  }
*/

// Usage with document loaders

// Create a document loader
class MyCustomDocumentLoader extends BaseDocumentLoader {
  load() {
    return Promise.resolve([
      {
        pageContent: "kitty",
        metadata: { source: "kitty.txt" },
      },
      {
        pageContent: "doggy",
        metadata: { source: "doggy.txt" },
      },
    ]);
  }
}

await clear();
const loader = new MyCustomDocumentLoader();
console.log(
  await index({
    docsSource: loader,
    recordManager,
    vectorStore,
    options: {
      cleanup: "incremental",
      sourceIdKey: "source",
    },
  })
);
/*
  {
    numAdded: 2,
    numUpdated: 0,
    numDeleted: 0,
    numSkipped: 0,
  }
*/

// Closing resources
await recordManager.end();
await vectorStore.end();
```
#### API Reference:
* [PostgresRecordManager](https://v02.api.js.langchain.com/classes/langchain_community_indexes_postgres.PostgresRecordManager.html) from `@langchain/community/indexes/postgres`
* index from `langchain/indexes`
* [PGVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.CharacterTextSplitter.html) from `@langchain/textsplitters`
* BaseDocumentLoader from `langchain/document_loaders/base`
Next steps
----------
You've now learned how to use indexing in your RAG pipelines.
Next, check out some of the other sections on retrieval.
How to get log probabilities
============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
Certain chat models can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain.
OpenAI
------
Install the `@langchain/openai` package and set your API key:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
For the OpenAI API to return log probabilities, we need to set the `logprobs` param to `true`. Then, the logprobs are included on each output [`AIMessage`](https://v02.api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) as part of the `response_metadata`:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  logprobs: true,
});

const responseMessage = await model.invoke("how are you today?");

responseMessage.response_metadata.logprobs.content.slice(0, 5);
```
[ { token: "Thank", logprob: -0.70174205, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }, { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }, { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }, { token: " asking", logprob: -0.0000013856493, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] }, { token: "!", logprob: -0.00030102333, bytes: [ 33 ], top_logprobs: [] }]
Logprobs are included on streamed message chunks as well:
```typescript
let count = 0;
const stream = await model.stream("How are you today?");
let aggregateResponse;

for await (const chunk of stream) {
  if (count > 5) {
    break;
  }
  if (aggregateResponse === undefined) {
    aggregateResponse = chunk;
  } else {
    aggregateResponse = aggregateResponse.concat(chunk);
  }
  console.log(aggregateResponse.response_metadata.logprobs?.content);
  count++;
}
```
```
[]
[
  { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] }
]
[
  { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] },
  { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] }
]
[
  { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] },
  { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] },
  { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] }
]
[
  { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] },
  { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] },
  { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] },
  { token: " asking", logprob: -0.0000029352968, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] }
]
[
  { token: "Thank", logprob: -0.23375113, bytes: [ 84, 104, 97, 110, 107 ], top_logprobs: [] },
  { token: " you", logprob: 0, bytes: [ 32, 121, 111, 117 ], top_logprobs: [] },
  { token: " for", logprob: -0.000004723352, bytes: [ 32, 102, 111, 114 ], top_logprobs: [] },
  { token: " asking", logprob: -0.0000029352968, bytes: [ 32, 97, 115, 107, 105, 110, 103 ], top_logprobs: [] },
  { token: "!", logprob: -0.00039694557, bytes: [ 33 ], top_logprobs: [] }
]
```
`topLogprobs`
-------------
To see alternate potential generations at each step, you can use the `topLogprobs` parameter:
```typescript
const model = new ChatOpenAI({
  model: "gpt-4o",
  logprobs: true,
  topLogprobs: 3,
});

const responseMessage = await model.invoke("how are you today?");

responseMessage.response_metadata.logprobs.content.slice(0, 5);
```
[ { token: "I'm", logprob: -2.2864406, bytes: [ 73, 39, 109 ], top_logprobs: [ { token: "Thank", logprob: -0.28644064, bytes: [ 84, 104, 97, 110, 107 ] }, { token: "Hello", logprob: -2.0364406, bytes: [ 72, 101, 108, 108, 111 ] }, { token: "I'm", logprob: -2.2864406, bytes: [ 73, 39, 109 ] } ] }, { token: " just", logprob: -0.14442946, bytes: [ 32, 106, 117, 115, 116 ], top_logprobs: [ { token: " just", logprob: -0.14442946, bytes: [ 32, 106, 117, 115, 116 ] }, { token: " an", logprob: -2.2694294, bytes: [ 32, 97, 110 ] }, { token: " here", logprob: -4.0194297, bytes: [ 32, 104, 101, 114, 101 ] } ] }, { token: " a", logprob: -0.00066632946, bytes: [ 32, 97 ], top_logprobs: [ { token: " a", logprob: -0.00066632946, bytes: [ 32, 97 ] }, { token: " lines", logprob: -7.750666, bytes: [ 32, 108, 105, 110, 101, 115 ] }, { token: " an", logprob: -9.250667, bytes: [ 32, 97, 110 ] } ] }, { token: " computer", logprob: -0.015423919, bytes: [ 32, 99, 111, 109, 112, 117, 116, 101, 114 ], top_logprobs: [ { token: " computer", logprob: -0.015423919, bytes: [ 32, 99, 111, 109, 112, 117, 116, 101, 114 ] }, { token: " program", logprob: -5.265424, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ] }, { token: " machine", logprob: -5.390424, bytes: [ 32, 109, 97, 99, 104, 105, 110, 101 ] } ] }, { token: " program", logprob: -0.0010724656, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ], top_logprobs: [ { token: " program", logprob: -0.0010724656, bytes: [ 32, 112, 114, 111, 103, 114, 97, 109 ] }, { token: "-based", logprob: -6.8760724, bytes: [ 45, 98, 97, 115, 101, 100 ] }, { token: " algorithm", logprob: -10.626073, bytes: [ 32, 97, 108, 103, 111, 114, 105, 116, 104, 109 ] } ] }]
Next steps
----------
You’ve now learned how to get logprobs from OpenAI models in LangChain.
Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to track token usage](/v0.2/docs/how_to/chat_token_usage_tracking).
How to stream chat model responses
==================================
All [chat models](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html) implement the [Runnable interface](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html), which comes with **default** implementations of standard runnable methods (i.e. `invoke`, `batch`, `stream`, `streamEvents`).
The **default** streaming implementation provides an `AsyncGenerator` that yields a single value: the final output from the underlying chat model provider.
tip
The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model, as it supports the same standard interface.
The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support.
See which [integrations support token-by-token streaming here](/v0.2/docs/integrations/chat/).
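Because of this shared interface, the loop below works with any chat model; a provider without native streaming support simply yields a single chunk containing the full response. A minimal sketch, assuming `model` is any instantiated chat model:

```typescript
// Safe to call on any chat model, streaming-capable or not.
const stream = await model.stream("Hello!");
for await (const chunk of stream) {
  console.log(chunk.content);
}
```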
Streaming
---------
Below, we use a `---` to help visualize the delimiter between tokens.
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
```
```typescript
for await (const chunk of await model.stream(
  "Write me a 1 verse song about goldfish on the moon"
)) {
  console.log(`${chunk.content}---`);
}
```
```
---Here--- is--- a------1------verse--- song--- about--- gol---dfish--- on--- the--- moon---:---Gol---dfish--- on--- the--- moon---,--- swimming--- through--- the--- sk---ies---,---Floating--- in--- the--- darkness---,--- beneath--- the--- lunar--- eyes---.---Weight---less--- as--- they--- drift---,--- through--- the--- endless--- voi---d,---D---rif---ting---,--- swimming---,--- exploring---,--- this--- new--- worl---d unexp---lo---ye---d.---------
```
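Chunks are additive, so if you also want the fully assembled message you can merge them as they arrive. A sketch using `concat`, which the returned `AIMessageChunk`s support:

```typescript
let finalChunk;
for await (const chunk of await model.stream(
  "Write me a 1 verse song about goldfish on the moon"
)) {
  // Merge each incoming chunk into the running aggregate.
  finalChunk = finalChunk === undefined ? chunk : finalChunk.concat(chunk);
}
console.log(finalChunk?.content);
```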
Stream events
-------------
Chat models also support the standard [streamEvents()](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#streamEvents) method.
This method is useful if you’re streaming output from a larger LLM application that contains multiple steps (e.g., a chain composed of a prompt, chat model and parser).
```typescript
let idx = 0;

for await (const event of model.streamEvents(
  "Write me a 1 verse song about goldfish on the moon",
  { version: "v1" }
)) {
  idx += 1;
  if (idx >= 5) {
    console.log("...Truncated");
    break;
  }
  console.log(event);
}
```
```
{
  run_id: "a84e1294-d281-4757-8f3f-dc4440612949",
  event: "on_llm_start",
  name: "ChatAnthropic",
  tags: [],
  metadata: {},
  data: { input: "Write me a 1 verse song about goldfish on the moon" }
}
{
  event: "on_llm_stream",
  run_id: "a84e1294-d281-4757-8f3f-dc4440612949",
  tags: [],
  metadata: {},
  name: "ChatAnthropic",
  data: {
    chunk: AIMessageChunk {
      lc_serializable: true,
      lc_kwargs: {
        content: "",
        additional_kwargs: {
          id: "msg_01DqDQ9in33ZhmrCzdZaRNMZ",
          type: "message",
          role: "assistant",
          model: "claude-3-haiku-20240307"
        },
        tool_calls: [],
        invalid_tool_calls: [],
        tool_call_chunks: [],
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "",
      name: undefined,
      additional_kwargs: {
        id: "msg_01DqDQ9in33ZhmrCzdZaRNMZ",
        type: "message",
        role: "assistant",
        model: "claude-3-haiku-20240307"
      },
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: [],
      tool_call_chunks: []
    }
  }
}
{
  event: "on_llm_stream",
  run_id: "a84e1294-d281-4757-8f3f-dc4440612949",
  tags: [],
  metadata: {},
  name: "ChatAnthropic",
  data: {
    chunk: AIMessageChunk {
      lc_serializable: true,
      lc_kwargs: {
        content: "Here",
        additional_kwargs: {},
        tool_calls: [],
        invalid_tool_calls: [],
        tool_call_chunks: [],
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Here",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: [],
      tool_call_chunks: []
    }
  }
}
{
  event: "on_llm_stream",
  run_id: "a84e1294-d281-4757-8f3f-dc4440612949",
  tags: [],
  metadata: {},
  name: "ChatAnthropic",
  data: {
    chunk: AIMessageChunk {
      lc_serializable: true,
      lc_kwargs: {
        content: " is",
        additional_kwargs: {},
        tool_calls: [],
        invalid_tool_calls: [],
        tool_call_chunks: [],
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: " is",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: [],
      tool_call_chunks: []
    }
  }
}
...Truncated
```
Next steps
----------
You’ve now seen a few ways you can stream chat model responses.
Next, check out this guide for more on [streaming with other LangChain modules](/v0.2/docs/how_to/streaming).
How to cache chat model responses
=================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LLMs](/v0.2/docs/concepts/#llms)
LangChain provides an optional caching layer for chat models. This is useful for two reasons:

* It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
* It can speed up your application by reducing the number of API calls you make to the LLM provider.
```typescript
import { ChatOpenAI } from "@langchain/openai";

// To make the caching really obvious, let's use a slower model.
const model = new ChatOpenAI({
  model: "gpt-4",
  cache: true,
});
```
In Memory Cache
---------------
The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
```typescript
console.time();

// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a joke!");
console.log(res);

console.timeEnd();
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
      additional_kwargs: { function_call: undefined, tool_calls: undefined }
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }

  default: 2.224s
*/
```
```typescript
console.time();

// The second time it is, so it goes faster
const res2 = await model.invoke("Tell me a joke!");
console.log(res2);

console.timeEnd();
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
      additional_kwargs: { function_call: undefined, tool_calls: undefined }
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }

  default: 181.98ms
*/
```
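Passing `cache: true` uses a default in-memory cache under the hood. If you'd rather manage the cache object yourself, for example to share one cache across several model instances, you can construct it explicitly and pass it in. Below is a minimal sketch, assuming the `InMemoryCache` export from `@langchain/core/caches` and that models constructed with identical parameters share cache keys:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { InMemoryCache } from "@langchain/core/caches";

// One cache instance shared by both model objects. Because the two
// models have identical parameters, a prompt cached through one should
// be served from the cache when sent through the other.
const cache = new InMemoryCache();

const modelA = new ChatOpenAI({ model: "gpt-4", cache });
const modelB = new ChatOpenAI({ model: "gpt-4", cache });
```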
Caching with Redis
------------------
LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package, along with `@langchain/community`, which provides the cache class:
```bash
# npm
npm install ioredis @langchain/community

# Yarn
yarn add ioredis @langchain/community

# pnpm
pnpm add ioredis @langchain/community
```
Then, you can pass a `cache` option when you instantiate the chat model. For example:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";

const client = new Redis("redis://localhost:6379");

const cache = new RedisCache(client, {
  ttl: 60, // Optional key expiration value
});

const model = new ChatOpenAI({ cache });

const response1 = await model.invoke("Do something random!");
console.log(response1);
/*
  AIMessage {
    content: "Sure! I'll generate a random number for you: 37",
    additional_kwargs: {}
  }
*/

const response2 = await model.invoke("Do something random!");
console.log(response2);
/*
  AIMessage {
    content: "Sure! I'll generate a random number for you: 37",
    additional_kwargs: {}
  }
*/

await client.disconnect();
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [RedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_ioredis.RedisCache.html) from `@langchain/community/caches/ioredis`
Caching on the File System
--------------------------
danger
This cache is not recommended for production use. It is only intended for local development.
LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want.

```typescript
import { LocalFileCache } from "langchain/cache/file_system";

const cache = await LocalFileCache.create();
```
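To persist cached completions between runs, you can point the cache at a stable directory instead of the default temporary one. A minimal sketch, assuming `create` accepts an optional directory path (the path below is just an example):

```typescript
import { LocalFileCache } from "langchain/cache/file_system";

// Hypothetical project-local directory for cached responses.
const cache = await LocalFileCache.create("./.langchain-cache");
```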
Next steps
----------
You've now learned how to cache model responses to save time and money.
Next, check out the other how-to guides on chat models, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to create your own custom chat model](/v0.2/docs/how_to/custom_chat).
How to cache model responses
============================
LangChain provides an optional caching layer for LLMs. This is useful for two reasons:

* It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
* It can speed up your application by reducing the number of API calls you make to the LLM provider.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  cache: true,
});
```
In Memory Cache
---------------
The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
```typescript
console.time();

// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a long joke");
console.log(res);

console.timeEnd();
/*
  A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it.
  The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar."
  Intrigued, the man asks what the tasks are.
  The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one."
  The man thinks for a moment and then confidently says, "I'll do it." He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand.
  The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight.
  The bartender hands the man the jar of money and asks, "How

  default: 4.187s
*/
```
```typescript
console.time();

// The second time it is in the cache, so it goes faster
const res2 = await model.invoke("Tell me a long joke");
console.log(res2);

console.timeEnd();
/*
  (The same joke as above, returned from the cache.)

  default: 175.74ms
*/
```

Note that for a cache hit, the second call must use the exact same prompt as the first.
Caching with Momento
--------------------
LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package.
To install for **Node.js**:
```bash
# npm
npm install @gomomento/sdk

# Yarn
yarn add @gomomento/sdk

# pnpm
pnpm add @gomomento/sdk
```
To install for **browser/edge workers**:
```bash
# npm
npm install @gomomento/sdk-web

# Yarn
yarn add @gomomento/sdk-web

# pnpm
pnpm add @gomomento/sdk-web
```
Next you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM like this:
```typescript
import { OpenAI } from "@langchain/openai";
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { MomentoCache } from "@langchain/community/caches/momento";

// See https://github.com/momentohq/client-sdk-javascript for connection options
const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_API_KEY",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});

const cache = await MomentoCache.fromProps({
  client,
  cacheName: "langchain",
});

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [MomentoCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_momento.MomentoCache.html) from `@langchain/community/caches/momento`
Caching with Redis
------------------
LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package, along with `@langchain/community`, which provides the cache class:
```bash
# npm
npm install ioredis @langchain/community

# Yarn
yarn add ioredis @langchain/community

# pnpm
pnpm add ioredis @langchain/community
```
Then, you can pass a `cache` option when you instantiate the LLM. For example:
```typescript
import { OpenAI } from "@langchain/openai";
import { RedisCache } from "@langchain/community/caches/ioredis";
import { Redis } from "ioredis";

// See https://github.com/redis/ioredis for connection options
const client = new Redis({});

const cache = new RedisCache(client);

const model = new OpenAI({ cache });
```
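As shown in the chat model caching example earlier in this document, `RedisCache` also accepts an options object as a second argument with an optional `ttl` field for key expiration.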
Caching with Upstash Redis
--------------------------
LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the `@upstash/redis` package:
```bash
# npm
npm install @upstash/redis

# Yarn
yarn add @upstash/redis

# pnpm
pnpm add @upstash/redis
```
You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token.
Then, you can pass a `cache` option when you instantiate the LLM. For example:
```typescript
import { OpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection options
const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
});

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:
```typescript
import { Redis } from "@upstash/redis";
import https from "https";
import { OpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// const client = new Redis({
//   url: process.env.UPSTASH_REDIS_REST_URL!,
//   token: process.env.UPSTASH_REDIS_REST_TOKEN!,
//   agent: new https.Agent({ keepAlive: true }),
// });

// Or simply call Redis.fromEnv() to automatically load the
// UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables.
const client = Redis.fromEnv({
  agent: new https.Agent({ keepAlive: true }),
});

const cache = new UpstashRedisCache({ client });

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://v02.api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
Caching with Cloudflare KV
--------------------------
info
This integration is only supported in Cloudflare Workers.
If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache.
For information on how to set up KV in Cloudflare, see [the official documentation](https://developers.cloudflare.com/kv/).
**Note:** If you are using TypeScript, you may need to install types if they aren't already present:
```bash
# npm
npm install -S @cloudflare/workers-types

# Yarn
yarn add @cloudflare/workers-types

# pnpm
pnpm add @cloudflare/workers-types
```
```typescript
import type { KVNamespace } from "@cloudflare/workers-types";
import { OpenAI } from "@langchain/openai";
import { CloudflareKVCache } from "@langchain/cloudflare";

export interface Env {
  KV_NAMESPACE: KVNamespace;
  OPENAI_API_KEY: string;
}

export default {
  async fetch(_request: Request, env: Env) {
    try {
      const cache = new CloudflareKVCache(env.KV_NAMESPACE);
      const model = new OpenAI({
        cache,
        model: "gpt-3.5-turbo-instruct",
        apiKey: env.OPENAI_API_KEY,
      });
      const response = await model.invoke("How are you today?");
      return new Response(JSON.stringify(response), {
        headers: { "content-type": "application/json" },
      });
    } catch (err: any) {
      console.log(err.message);
      return new Response(err.message, { status: 500 });
    }
  },
};
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [CloudflareKVCache](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareKVCache.html) from `@langchain/cloudflare`
Caching on the File System
--------------------------
danger
This cache is not recommended for production use. It is only intended for local development.
LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want.

```typescript
import { LocalFileCache } from "langchain/cache/file_system";

const cache = await LocalFileCache.create();
```
Next steps
----------
You've now learned how to cache model responses to save time and money.
Next, check out the other how-to guides on LLMs, like [how to create your own custom LLM class](/v0.2/docs/how_to/custom_llm).
How to create a custom LLM class
================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LLMs](/v0.2/docs/concepts/#llms)
This guide goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is directly supported in LangChain.
There are a few required things that a custom LLM needs to implement after extending the [`LLM` class](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.LLM.html):
* A `_call` method that takes in a string and call options (which include things like `stop` sequences), and returns a string.
* A `_llmType` method that returns a string. Used for logging purposes only.
You can also implement the following optional method:
* A `_streamResponseChunks` method that returns an `AsyncIterator` and yields [`GenerationChunks`](https://v02.api.js.langchain.com/classes/langchain_core_outputs.GenerationChunk.html). This allows the LLM to support streaming outputs.
Let’s implement a very simple custom LLM that just echoes back the first `n` characters of the input.
```typescript
import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import type { CallbackManagerForLLMRun } from "langchain/callbacks";
import { GenerationChunk } from "langchain/schema";

export interface CustomLLMInput extends BaseLLMParams {
  n: number;
}

export class CustomLLM extends LLM {
  n: number;

  constructor(fields: CustomLLMInput) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "custom";
  }

  async _call(
    prompt: string,
    options: this["ParsedCallOptions"],
    runManager: CallbackManagerForLLMRun
  ): Promise<string> {
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    return prompt.slice(0, this.n);
  }

  async *_streamResponseChunks(
    prompt: string,
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): AsyncGenerator<GenerationChunk> {
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    for (const letter of prompt.slice(0, this.n)) {
      yield new GenerationChunk({
        text: letter,
      });
      // Trigger the appropriate callback
      await runManager?.handleLLMNewToken(letter);
    }
  }
}
```
We can now use this as any other LLM:
```typescript
const llm = new CustomLLM({ n: 4 });

await llm.invoke("I am an LLM");
```
```
I am
```
It also supports streaming:
```typescript
const stream = await llm.stream("I am an LLM");

for await (const chunk of stream) {
  console.log(chunk);
}
```
```
I
 
a
m
```
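Because `CustomLLM` extends the base `LLM` class, it also inherits the standard `Runnable` methods. As a quick sketch, here's `.batch()` running several prompts through the echo model (nothing below is specific to custom LLMs; `batch` comes from the generic Runnable interface):

```typescript
const llm = new CustomLLM({ n: 4 });

// `.batch()` is inherited from the base Runnable class, so the custom
// LLM supports it with no extra code.
const results = await llm.batch(["I am an LLM", "Hello there"]);
console.log(results);
// Given n = 4, this should log: [ "I am", "Hell" ]
```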
Richer outputs
--------------
If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseLLM`](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html) class and implement the lower level `_generate` method. Rather than taking a single string as input and a single string output, it can take multiple input strings and map each to multiple string outputs. Additionally, it returns a `Generation` output with fields for additional metadata rather than just a string.
```typescript
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { LLMResult } from "@langchain/core/outputs";
import {
  BaseLLM,
  BaseLLMCallOptions,
  BaseLLMParams,
} from "@langchain/core/language_models/llms";

export interface AdvancedCustomLLMCallOptions extends BaseLLMCallOptions {}

export interface AdvancedCustomLLMParams extends BaseLLMParams {
  n: number;
}

export class AdvancedCustomLLM extends BaseLLM<AdvancedCustomLLMCallOptions> {
  n: number;

  constructor(fields: AdvancedCustomLLMParams) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "advanced_custom_llm";
  }

  async _generate(
    inputs: string[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<LLMResult> {
    const outputs = inputs.map((input) => input.slice(0, this.n));
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());

    // One input could generate multiple outputs.
    const generations = outputs.map((output) => [
      {
        text: output,
        // Optional additional metadata for the generation
        generationInfo: { outputCount: 1 },
      },
    ]);
    const tokenUsage = {
      usedTokens: this.n,
    };
    return {
      generations,
      llmOutput: { tokenUsage },
    };
  }
}
```
This will pass the additional returned information to callback events and to the `streamEvents` method:
```typescript
const llm = new AdvancedCustomLLM({ n: 4 });

const eventStream = await llm.streamEvents("I am an LLM", {
  version: "v1",
});

for await (const event of eventStream) {
  if (event.event === "on_llm_end") {
    console.log(JSON.stringify(event, null, 2));
  }
}
```
{ "event": "on_llm_end", "name": "AdvancedCustomLLM", "run_id": "a883a705-c651-4236-8095-cb515e2d4885", "tags": [], "metadata": {}, "data": { "output": { "generations": [ [ { "text": "I am", "generationInfo": { "outputCount": 1 } } ] ], "llmOutput": { "tokenUsage": { "usedTokens": 4 } } } }}
How to embed text data
======================
info
Head to [Integrations](/v0.2/docs/integrations/text_embedding) for documentation on built-in integrations with text embedding providers.
Prerequisites
This guide assumes familiarity with the following concepts:
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
Get started
-----------
Below is an example of how to use the OpenAI embeddings. Embedding providers occasionally use different methods for queries versus documents, so the embeddings class exposes separate `embedQuery` and `embedDocuments` methods.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
```
Embed queries
-------------
```typescript
const res = await embeddings.embedQuery("Hello world");
/*
  [
    -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806,
    0.012571548, -0.019156644, 0.009036391, -0.010227379, -0.026945334,
    0.022861943, 0.010321903, -0.023479493, -0.0066544134, 0.007977734,
    0.0026371893, 0.025206111, -0.012048521, 0.012943339, 0.013094575,
    -0.010580265, -0.003509951, 0.004070787, 0.008639394, -0.020631202,
    ... 1511 more items
  ]
*/
```
Embed documents
---------------
```typescript
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
/*
  [
    [
      -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806,
      0.012571548, -0.019156644, 0.009036391, -0.010227379, -0.026945334,
      0.022861943, 0.010321903, -0.023479493, -0.0066544134, 0.007977734,
      0.0026371893, 0.025206111, -0.012048521, 0.012943339, 0.013094575,
      -0.010580265, -0.003509951, 0.004070787, 0.008639394, -0.020631202,
      ... 1511 more items
    ],
    [
      -0.009446913, -0.013253193, 0.013174579, 0.0057552797, -0.038993083,
      0.0077763423, -0.0260478, -0.0114384955, -0.0022683728, -0.016509168,
      0.041797023, 0.01787183, 0.00552271, -0.0049789557, 0.018146982,
      -0.01542166, 0.033752076, 0.006112323, 0.023872782, -0.016535373,
      -0.006623321, 0.016116094, -0.0061090477, -0.0044155475, -0.016627092,
      ... 1511 more items
    ]
  ]
*/
```
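Once you have vectors for a query and a set of documents, semantic search boils down to comparing them in vector space. As a minimal sketch of that idea, here's a cosine similarity comparison over the vectors from the two calls above (the `cosineSimilarity` helper is our own, not a LangChain export):

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const queryVector = await embeddings.embedQuery("Hello world");
const docVectors = await embeddings.embedDocuments(["Hello world", "Bye bye"]);

// Higher scores mean a document sits closer to the query in vector space.
const scores = docVectors.map((doc) => cosineSimilarity(queryVector, doc));
console.log(scores); // The first document should score highest.
```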
Next steps
----------
You've now learned how to use embeddings models with queries and text.
Next, check out how to [avoid excessively recomputing embeddings with caching](/v0.2/docs/how_to/caching_embeddings), or the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
How to add message history
==========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Configuring chain parameters at runtime](/v0.2/docs/how_to/binding)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat Messages](/v0.2/docs/concepts/#message-types)
The `RunnableWithMessageHistory` lets us add message history to certain types of chains.
Specifically, it can be used for any Runnable that takes as input one of
* a sequence of [`BaseMessage`s](/v0.2/docs/concepts/#message-types)
* an object with a key that takes a sequence of `BaseMessage`
* an object with a key that takes the latest message(s) as a string or sequence of `BaseMessage`, and a separate key that takes historical messages
And returns as output one of
* a string that can be treated as the contents of an `AIMessage`
* a sequence of `BaseMessage`
* an object with a key that contains a sequence of `BaseMessage`
Let's take a look at some examples to see how it works.
Setup[](#setup "Direct link to Setup")
---------------------------------------
We'll use Upstash to store our chat message histories and Anthropic's claude-3-sonnet model, so we'll need to install the following dependencies:
```bash
npm install @langchain/anthropic @langchain/community @upstash/redis
yarn add @langchain/anthropic @langchain/community @upstash/redis
pnpm add @langchain/anthropic @langchain/community @upstash/redis
```
You'll need to set an environment variable for `ANTHROPIC_API_KEY` and grab your Upstash REST URL and secret token.
### [LangSmith](https://smith.langchain.com/)[](#langsmith "Direct link to langsmith")
LangSmith is especially useful for something like message history injection, where it can be hard to otherwise understand what the inputs are to various parts of the chain.
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to uncomment the following and set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-api-key>"
```
Let's create a simple runnable that takes a dict as input and returns a `BaseMessage`.
In this case the `"question"` key in the input represents our input message, and the `"history"` key is where our historical messages will be injected.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";
import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";
// For demos, you can also use an in-memory store:
// import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You're an assistant who's good at {ability}"],
  new MessagesPlaceholder("history"),
  ["human", "{question}"],
]);

const chain = prompt.pipe(
  new ChatAnthropic({ model: "claude-3-sonnet-20240229" })
);
```
### Adding message history[](#adding-message-history "Direct link to Adding message history")
To add message history to our original chain we wrap it in the `RunnableWithMessageHistory` class.
Crucially, we also need to define a `getMessageHistory()` method that takes a `sessionId` string and returns a `BaseChatMessageHistory` based on it. Given the same input, this method should return an equivalent output.
In this case, we'll also want to specify `inputMessagesKey` (the key to be treated as the latest input message) and `historyMessagesKey` (the key to add historical messages to).
```typescript
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) =>
    new UpstashRedisChatMessageHistory({
      sessionId,
      config: {
        url: process.env.UPSTASH_REDIS_REST_URL!,
        token: process.env.UPSTASH_REDIS_REST_TOKEN!,
      },
    }),
  inputMessagesKey: "question",
  historyMessagesKey: "history",
});
```
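If you don't need a persistent backing store, here is a minimal in-memory sketch of the same pattern, using the `ChatMessageHistory` class mentioned in the import comment above; the `histories` map and the `inMemoryChainWithHistory` name are our own illustrations:

```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// Keep one history object per session so that the same sessionId
// always resolves to the same (mutable) history, as required above.
const histories = new Map<string, ChatMessageHistory>();

const inMemoryChainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId) => {
    if (!histories.has(sessionId)) {
      histories.set(sessionId, new ChatMessageHistory());
    }
    return histories.get(sessionId)!;
  },
  inputMessagesKey: "question",
  historyMessagesKey: "history",
});
```

Note that this only works within a single process; anything multi-instance should use a shared store like the Upstash example above.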
Invoking with config[](#invoking-with-config "Direct link to Invoking with config")
------------------------------------------------------------------------------------
Whenever we call our chain with message history, we need to include an additional config object that contains the `sessionId`:
```typescript
{
  configurable: {
    sessionId: "<SESSION_ID>",
  },
}
```
Given the same configuration, our chain should be pulling from the same chat message history.
```typescript
const result = await chainWithHistory.invoke(
  {
    ability: "math",
    question: "What does cosine mean?",
  },
  {
    configurable: {
      sessionId: "foobarbaz",
    },
  }
);

console.log(result);

/*
  AIMessage {
    content: 'Cosine refers to one of the basic trigonometric functions. Specifically:\n' +
      '\n' +
      '- Cosine is one of the three main trigonometric functions, along with sine and tangent. It is often abbreviated as cos.\n' +
      '\n' +
      '- For a right triangle with sides a, b, and c (where c is the hypotenuse), cosine represents the ratio of the length of the adjacent side (a) to the length of the hypotenuse (c). So cos(A) = a/c, where A is the angle opposite side a.\n' +
      '\n' +
      '- On the Cartesian plane, cosine represents the x-coordinate of a point on the unit circle for a given angle. So if you take an angle θ on the unit circle, the cosine of θ gives you the x-coordinate of where the terminal side of that angle intersects the circle.\n' +
      '\n' +
      '- The cosine function has a periodic waveform that oscillates between 1 and -1. Its graph forms a cosine wave.\n' +
      '\n' +
      'So in essence, cosine helps relate an angle in a right triangle to the ratio of two of its sides. Along with sine and tangent, it is foundational to trigonometry and mathematical modeling of periodic functions.',
    name: undefined,
    additional_kwargs: {
      id: 'msg_01QnnAkKEz7WvhJrwLWGbLBm',
      type: 'message',
      role: 'assistant',
      model: 'claude-3-sonnet-20240229',
      stop_reason: 'end_turn',
      stop_sequence: null
    }
  }
*/

const result2 = await chainWithHistory.invoke(
  {
    ability: "math",
    question: "What's its inverse?",
  },
  {
    configurable: {
      sessionId: "foobarbaz",
    },
  }
);

console.log(result2);

/*
  AIMessage {
    content: 'The inverse of the cosine function is the arcsine or inverse sine function, often written as sin−1(x) or sin^{-1}(x).\n' +
      '\n' +
      'Some key properties of the inverse cosine function:\n' +
      '\n' +
      '- It accepts values between -1 and 1 as inputs and returns angles from 0 to π radians (0 to 180 degrees). This is the inverse of the regular cosine function, which takes angles and returns the cosine ratio.\n' +
      '\n' +
      '- It is also called cos−1(x) or cos^{-1}(x) (read as "cosine inverse of x").\n' +
      '\n' +
      '- The notation sin−1(x) is usually preferred over cos−1(x) since it relates more directly to the unit circle definition of cosine. sin−1(x) gives the angle whose sine equals x.\n' +
      '\n' +
      '- The arcsine function is one-to-one on the domain [-1, 1]. This means every output angle maps back to exactly one input ratio x. This one-to-one mapping is what makes it the mathematical inverse of cosine.\n' +
      '\n' +
      'So in summary, arcsine or inverse sine, written as sin−1(x) or sin^{-1}(x), gives you the angle whose cosine evaluates to the input x, undoing the cosine function. It is used throughout trigonometry and calculus.',
    additional_kwargs: {
      id: 'msg_01PYRhpoUudApdJvxug6R13W',
      type: 'message',
      role: 'assistant',
      model: 'claude-3-sonnet-20240229',
      stop_reason: 'end_turn',
      stop_sequence: null
    }
  }
*/
```
tip
[LangSmith trace](https://smith.langchain.com/public/50377a89-d0b8-413b-8cd7-8e6618835e00/r)
Looking at the LangSmith trace for the second call, we can see that when constructing the prompt, a "history" variable has been injected, which is a list of two messages (our first input and first output).
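If you want to verify what was persisted, you can also read a session's stored messages back directly. A minimal sketch, assuming the same Upstash credentials as above and using the standard `getMessages()` method that chat message history classes expose:

```typescript
// Fetch the raw stored messages for the session to inspect the history.
const history = new UpstashRedisChatMessageHistory({
  sessionId: "foobarbaz",
  config: {
    url: process.env.UPSTASH_REDIS_REST_URL!,
    token: process.env.UPSTASH_REDIS_REST_TOKEN!,
  },
});

const storedMessages = await history.getMessages();
// After the two invocations above, this should contain four messages:
// two HumanMessages and two AIMessages, in order.
console.log(storedMessages.length);
```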
How to use few shot examples in chat models
===========================================
This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.
There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html) as a flexible starting point, and you can modify or replace them as you see fit.
The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples into a final prompt to provide to the model.
**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/v0.2/docs/concepts/#message-types) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/v0.2/docs/how_to/few_shot_examples/) guide.
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Example selectors](/v0.2/docs/concepts/#example-selectors)
* [Chat models](/v0.2/docs/concepts/#chat-model)
* [Vectorstores](/v0.2/docs/concepts/#vectorstores)
Fixed Examples[](#fixed-examples "Direct link to Fixed Examples")
------------------------------------------------------------------
The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.
The basic components of the template are:

* `examples`: An array of object examples to include in the final prompt.
* `examplePrompt`: converts each example into 1 or more messages through its [`formatMessages`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html#formatMessages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
Below is a simple demonstration. First, define the examples you’d like to include:
```typescript
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";

const examples = [
  { input: "2+2", output: "4" },
  { input: "2+3", output: "5" },
];
```
Next, assemble them into the few-shot prompt template.
```typescript
// This is a prompt template used to format each individual example.
const examplePrompt = ChatPromptTemplate.fromMessages([
  ["human", "{input}"],
  ["ai", "{output}"],
]);

const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  examplePrompt,
  examples,
  inputVariables: [], // no input variables
});

const result = await fewShotPrompt.invoke({});
console.log(result.toChatMessages());
```
```
[
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "2+2",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "4",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    tool_calls: [],
    invalid_tool_calls: []
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "2+3",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "5",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    tool_calls: [],
    invalid_tool_calls: []
  }
]
```
Finally, we assemble the final prompt as shown below, passing `fewShotPrompt` directly into the `fromMessages` factory method, and use it with a model:
```typescript
const finalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a wondrous wizard of math."],
  fewShotPrompt,
  ["human", "{input}"],
]);
```
### Pick your chat model:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

#### OpenAI

Install dependencies:

```bash
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```

Add environment variables:

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

#### Anthropic

Install dependencies:

```bash
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
```

Add environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:

```bash
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
```

Add environment variables:

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

```bash
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
```

Add environment variables:

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### Groq

Install dependencies:

```bash
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
```

Add environment variables:

```bash
GROQ_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

#### VertexAI

Install dependencies:

```bash
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
```

Add environment variables:

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

Instantiate the model:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
```typescript
const chain = finalPrompt.pipe(model);

await chain.invoke({ input: "What's the square of a triangle?" });
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 23, promptTokens: 52, totalTokens: 75 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
```
Dynamic few-shot prompting[](#dynamic-few-shot-prompting "Direct link to Dynamic few-shot prompting")
------------------------------------------------------------------------------------------------------
Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the `examples` passed into `FewShotChatMessagePromptTemplate` with an `exampleSelector`. The other components remain the same as above! Our dynamic few-shot prompt template would look like:
* `exampleSelector`: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the [BaseExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.BaseExampleSelector.html) interface. A common example is the vectorstore-backed [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html).
* `examplePrompt`: converts each example into 1 or more messages through its [`formatMessages`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotChatMessagePromptTemplate.html#formatMessages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
These once again can be composed with other messages and chat templates to assemble your final prompt.
Let’s walk through an example with the `SemanticSimilarityExampleSelector`. Since this implementation uses a vectorstore to select examples based on semantic similarity, we will want to first populate the store. Since the basic idea here is that we want to search for and return examples most similar to the text input, we embed the `values` of our prompt examples rather than considering the keys:
```typescript
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const examples = [
  { input: "2+2", output: "4" },
  { input: "2+3", output: "5" },
  { input: "2+4", output: "6" },
  { input: "What did the cow say to the moon?", output: "nothing at all" },
  {
    input: "Write me a poem about the moon",
    output: "One for the moon, and one for me, who are we to talk about the moon?",
  },
];

const toVectorize = examples.map(
  (example) => `${example.input} ${example.output}`
);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromTexts(
  toVectorize,
  examples,
  embeddings
);
```
### Create the `exampleSelector`[](#create-the-exampleselector "Direct link to create-the-exampleselector")
With a vectorstore created, we can create the `exampleSelector`. Here we will call it in isolation, and set `k` on it to only fetch the two examples closest to the input.
```typescript
const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore,
  k: 2,
});

// The prompt template will load examples by passing the input to the `selectExamples` method
await exampleSelector.selectExamples({ input: "horse" });
```
```
[
  { input: "What did the cow say to the moon?", output: "nothing at all" },
  { input: "2+4", output: "6" }
]
```
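Selectors implementing the `BaseExampleSelector` interface also expose an `addExample` method, so you can grow the underlying example set at runtime. A minimal sketch; the new example object below is our own illustration:

```typescript
// Add a new example to the underlying vector store at runtime.
await exampleSelector.addExample({ input: "3+3", output: "6" });

// Subsequent selections can now match against the new example.
console.log(await exampleSelector.selectExamples({ input: "What's 3+3?" }));
```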
### Create prompt template[](#create-prompt-template "Direct link to Create prompt template")
We now assemble the prompt template, using the `exampleSelector` created above.
```typescript
import {
  ChatPromptTemplate,
  FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";

// Define the few-shot prompt.
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
  // The input variables select the values to pass to the exampleSelector
  inputVariables: ["input"],
  exampleSelector,
  // Define how each example will be formatted.
  // In this case, each example will become 2 messages:
  // 1 human, and 1 AI
  examplePrompt: ChatPromptTemplate.fromMessages([
    ["human", "{input}"],
    ["ai", "{output}"],
  ]),
});

const results = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(results.toChatMessages());
```
```
[
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "2+3",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "5",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    tool_calls: [],
    invalid_tool_calls: []
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "2+2",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "4",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {},
    tool_calls: [],
    invalid_tool_calls: []
  }
]
```
And we can pass this few-shot chat message prompt template into another chat prompt template:
```typescript
const finalPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a wondrous wizard of math."],
  fewShotPrompt,
  ["human", "{input}"],
]);

const result = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(result);
```
```
ChatPromptValue {
  lc_serializable: true,
  lc_kwargs: {
    messages: [
      HumanMessage {
        lc_serializable: true,
        lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "2+3",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {}
      },
      AIMessage {
        lc_serializable: true,
        lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "5",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {},
        tool_calls: [],
        invalid_tool_calls: []
      },
      HumanMessage {
        lc_serializable: true,
        lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "2+2",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {}
      },
      AIMessage {
        lc_serializable: true,
        lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "4",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {},
        tool_calls: [],
        invalid_tool_calls: []
      }
    ]
  },
  lc_namespace: [ "langchain_core", "prompt_values" ],
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "2+3",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: { content: "5", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "5",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: []
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "2+2",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: { content: "4", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "4",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: []
    }
  ]
}
```
### Use with a chat model[](#use-with-an-chat-model "Direct link to Use with a chat model")
Finally, you can connect your model to the few-shot prompt.
Pick, install, and instantiate your chat model exactly as shown in the [Fixed Examples](#fixed-examples) section above.
```typescript
const chain = finalPrompt.pipe(model);

await chain.invoke({ input: "What's 3+3?" });
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "6",
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "6",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 1, promptTokens: 51, totalTokens: 52 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
```
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You’ve now learned how to add few-shot examples to your chat prompts.
Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with text completion models](/v0.2/docs/how_to/few_shot_examples), or the other [example selector how-to guides](/v0.2/docs/how_to/example_selectors/).
How to generate multiple embeddings per document
================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Text splitters](/v0.2/docs/concepts/#text-splitters)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
Embedding different representations of an original document, then returning the original document when any of the representations results in a search hit, can allow you to tune and improve your retrieval performance. LangChain has a base [`MultiVectorRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) designed to do just this!
A lot of the complexity lies in how to create the multiple vectors per document. This guide covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.
Some methods to create multiple vectors per document include:
* smaller chunks: split a document into smaller chunks, and embed those (e.g. the [`ParentDocumentRetriever`](/v0.2/docs/how_to/parent_document_retriever))
* summary: create a summary for each document, embed that along with (or instead of) the document
* hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document
Note that this also enables another method of adding embeddings: manually. This is useful because you can explicitly add questions or queries that should lead to a document being retrieved, giving you more control. A minimal sketch of this manual approach follows.
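To illustrate, here is a minimal sketch of the manual approach (not part of the original guide) that assumes a `retriever`, `docIds`, and `idKey` set up as in the examples below. It embeds a hand-written query and tags it with the ID of the document that the query should surface:

import { Document } from "@langchain/core/documents";

// A hypothetical hand-written query that should lead back to the first document.
const manualQueryDoc = new Document({
  pageContent: "What did the speaker say about Justice Breyer?",
  metadata: { [idKey]: docIds[0] },
});

// Index it alongside the other representations; a search hit on this
// embedding will cause the retriever to return the original document.
await retriever.vectorstore.addDocuments([manualQueryDoc]);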
Smaller chunks
------------------------------------------------------------------
Oftentimes it can be useful to retrieve larger chunks of information, but embed smaller chunks. This allows the embeddings to capture the semantic meaning as closely as possible, while still passing as much context as possible downstream. Note that this is what the `ParentDocumentRetriever` does; here we show what is going on under the hood.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
import * as uuid from "uuid";
import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { InMemoryStore } from "@langchain/core/stores";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { Document } from "@langchain/core/documents";

const textLoader = new TextLoader("../examples/state_of_the_union.txt");
const parentDocuments = await textLoader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10000,
  chunkOverlap: 20,
});
const docs = await splitter.splitDocuments(parentDocuments);

const idKey = "doc_id";
const docIds = docs.map((_) => uuid.v4());

const childSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 400,
  chunkOverlap: 0,
});

const subDocs = [];
for (let i = 0; i < docs.length; i += 1) {
  const childDocs = await childSplitter.splitDocuments([docs[i]]);
  const taggedChildDocs = childDocs.map((childDoc) => {
    // eslint-disable-next-line no-param-reassign
    childDoc.metadata[idKey] = docIds[i];
    return childDoc;
  });
  subDocs.push(...taggedChildDocs);
}

// The byteStore to use to store the original chunks
const byteStore = new InMemoryStore<Uint8Array>();

// The vectorstore to use to index the child chunks
const vectorstore = await FaissStore.fromDocuments(
  subDocs,
  new OpenAIEmbeddings()
);

const retriever = new MultiVectorRetriever({
  vectorstore,
  byteStore,
  idKey,
  // Optional `k` parameter to search for more child documents in VectorStore.
  // Note that this does not exactly correspond to the number of final (parent) documents
  // retrieved, as multiple child documents can point to the same parent.
  childK: 20,
  // Optional `k` parameter to limit number of final, parent documents returned from this
  // retriever and sent to LLM. This is an upper-bound, and the final count may be lower than this.
  parentK: 5,
});

const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [
  docIds[i],
  originalDoc,
]);

// Use the retriever to add the original chunks to the document store
await retriever.docstore.mset(keyValuePairs);

// Vectorstore alone retrieves the small chunks
const vectorstoreResult = await retriever.vectorstore.similaritySearch(
  "justice breyer"
);
console.log(vectorstoreResult[0].pageContent.length);
/*
  390
*/

// Retriever returns larger result
const retrieverResult = await retriever.invoke("justice breyer");
console.log(retrieverResult[0].pageContent.length);
/*
  9770
*/
#### API Reference:
* [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector`
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Summary
---------------------------------------------
A summary can often distill more accurately what a chunk is about, leading to better retrieval. Here we show how to create summaries, and then embed them.
import * as uuid from "uuid";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { InMemoryStore } from "@langchain/core/stores";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { Document } from "@langchain/core/documents";

const textLoader = new TextLoader("../examples/state_of_the_union.txt");
const parentDocuments = await textLoader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10000,
  chunkOverlap: 20,
});
const docs = await splitter.splitDocuments(parentDocuments);

const chain = RunnableSequence.from([
  { content: (doc: Document) => doc.pageContent },
  PromptTemplate.fromTemplate(`Summarize the following document:\n\n{content}`),
  new ChatOpenAI({
    maxRetries: 0,
  }),
  new StringOutputParser(),
]);

const summaries = await chain.batch(docs, {
  maxConcurrency: 5,
});

const idKey = "doc_id";
const docIds = docs.map((_) => uuid.v4());
const summaryDocs = summaries.map((summary, i) => {
  const summaryDoc = new Document({
    pageContent: summary,
    metadata: {
      [idKey]: docIds[i],
    },
  });
  return summaryDoc;
});

// The byteStore to use to store the original chunks
const byteStore = new InMemoryStore<Uint8Array>();

// The vectorstore to use to index the child chunks
const vectorstore = await FaissStore.fromDocuments(
  summaryDocs,
  new OpenAIEmbeddings()
);

const retriever = new MultiVectorRetriever({
  vectorstore,
  byteStore,
  idKey,
});

const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [
  docIds[i],
  originalDoc,
]);

// Use the retriever to add the original chunks to the document store
await retriever.docstore.mset(keyValuePairs);

// We could also add the original chunks to the vectorstore if we wish
// const taggedOriginalDocs = docs.map((doc, i) => {
//   doc.metadata[idKey] = docIds[i];
//   return doc;
// });
// retriever.vectorstore.addDocuments(taggedOriginalDocs);

// Vectorstore alone retrieves the small chunks
const vectorstoreResult = await retriever.vectorstore.similaritySearch(
  "justice breyer"
);
console.log(vectorstoreResult[0].pageContent.length);
/*
  1118
*/

// Retriever returns larger result
const retrieverResult = await retriever.invoke("justice breyer");
console.log(retrieverResult[0].pageContent.length);
/*
  9770
*/
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector`
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Hypothetical queries
------------------------------------------------------------------------------------
An LLM can also be used to generate a list of hypothetical questions that could be asked of a particular document. These questions can then be embedded and used to retrieve the original document:
import * as uuid from "uuid";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MultiVectorRetriever } from "langchain/retrievers/multi_vector";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { InMemoryStore } from "@langchain/core/stores";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { Document } from "@langchain/core/documents";
import { JsonKeyOutputFunctionsParser } from "@langchain/core/output_parsers/openai_functions";

const textLoader = new TextLoader("../examples/state_of_the_union.txt");
const parentDocuments = await textLoader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10000,
  chunkOverlap: 20,
});
const docs = await splitter.splitDocuments(parentDocuments);

const functionsSchema = [
  {
    name: "hypothetical_questions",
    description: "Generate hypothetical questions",
    parameters: {
      type: "object",
      properties: {
        questions: {
          type: "array",
          items: {
            type: "string",
          },
        },
      },
      required: ["questions"],
    },
  },
];

const functionCallingModel = new ChatOpenAI({
  maxRetries: 0,
  model: "gpt-4",
}).bind({
  functions: functionsSchema,
  function_call: { name: "hypothetical_questions" },
});

const chain = RunnableSequence.from([
  { content: (doc: Document) => doc.pageContent },
  PromptTemplate.fromTemplate(
    `Generate a list of 3 hypothetical questions that the below document could be used to answer:\n\n{content}`
  ),
  functionCallingModel,
  new JsonKeyOutputFunctionsParser<string[]>({ attrName: "questions" }),
]);

const hypotheticalQuestions = await chain.batch(docs, {
  maxConcurrency: 5,
});

const idKey = "doc_id";
const docIds = docs.map((_) => uuid.v4());

const hypotheticalQuestionDocs = hypotheticalQuestions
  .map((questionArray, i) => {
    const questionDocuments = questionArray.map((question) => {
      const questionDocument = new Document({
        pageContent: question,
        metadata: {
          [idKey]: docIds[i],
        },
      });
      return questionDocument;
    });
    return questionDocuments;
  })
  .flat();

// The byteStore to use to store the original chunks
const byteStore = new InMemoryStore<Uint8Array>();

// The vectorstore to use to index the child chunks
const vectorstore = await FaissStore.fromDocuments(
  hypotheticalQuestionDocs,
  new OpenAIEmbeddings()
);

const retriever = new MultiVectorRetriever({
  vectorstore,
  byteStore,
  idKey,
});

const keyValuePairs: [string, Document][] = docs.map((originalDoc, i) => [
  docIds[i],
  originalDoc,
]);

// Use the retriever to add the original chunks to the document store
await retriever.docstore.mset(keyValuePairs);

// We could also add the original chunks to the vectorstore if we wish
// const taggedOriginalDocs = docs.map((doc, i) => {
//   doc.metadata[idKey] = docIds[i];
//   return doc;
// });
// retriever.vectorstore.addDocuments(taggedOriginalDocs);

// Vectorstore alone retrieves the small chunks
const vectorstoreResult = await retriever.vectorstore.similaritySearch(
  "justice breyer"
);
console.log(vectorstoreResult[0].pageContent);
/*
  "What measures will be taken to crack down on corporations overcharging American businesses and consumers?"
*/

// Retriever returns larger result
const retrieverResult = await retriever.invoke("justice breyer");
console.log(retrieverResult[0].pageContent.length);
/*
  9770
*/
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MultiVectorRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html) from `langchain/retrievers/multi_vector`
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [JsonKeyOutputFunctionsParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers_openai_functions.JsonKeyOutputFunctionsParser.html) from `@langchain/core/output_parsers/openai_functions`
Next steps
------------------------------------------------------
You've now learned a few ways to generate multiple embeddings per document.
Next, check out the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
How to use few shot examples
============================
In this guide, we’ll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.
A few-shot prompt template can be constructed from either a set of examples, or from an [Example Selector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.BaseExampleSelector.html) class responsible for choosing a subset of examples from the defined set.
This guide will cover few-shotting with string prompt templates. For a guide on few-shotting with chat messages for chat models, see [here](/v0.2/docs/how_to/few_shot_examples_chat/).
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Example selectors](/v0.2/docs/concepts/#example-selectors)
* [LLMs](/v0.2/docs/concepts/#llms)
* [Vectorstores](/v0.2/docs/concepts/#vectorstores)
Create a formatter for the few-shot examples
------------------------------------------------------------------------------------------------------------------------------------------------------------
Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object.
import { PromptTemplate } from "@langchain/core/prompts";

const examplePrompt = PromptTemplate.fromTemplate(
  "Question: {question}\n{answer}"
);
Creating the example set
------------------------------------------------------------------------------------------------
Next, we’ll create a list of few-shot examples. Each example should be an object representing an example input to the formatter prompt we defined above.
const examples = [
  {
    question: "Who lived longer, Muhammad Ali or Alan Turing?",
    answer: `
  Are follow up questions needed here: Yes.
  Follow up: How old was Muhammad Ali when he died?
  Intermediate answer: Muhammad Ali was 74 years old when he died.
  Follow up: How old was Alan Turing when he died?
  Intermediate answer: Alan Turing was 41 years old when he died.
  So the final answer is: Muhammad Ali
  `,
  },
  {
    question: "When was the founder of craigslist born?",
    answer: `
  Are follow up questions needed here: Yes.
  Follow up: Who was the founder of craigslist?
  Intermediate answer: Craigslist was founded by Craig Newmark.
  Follow up: When was Craig Newmark born?
  Intermediate answer: Craig Newmark was born on December 6, 1952.
  So the final answer is: December 6, 1952
  `,
  },
  {
    question: "Who was the maternal grandfather of George Washington?",
    answer: `
  Are follow up questions needed here: Yes.
  Follow up: Who was the mother of George Washington?
  Intermediate answer: The mother of George Washington was Mary Ball Washington.
  Follow up: Who was the father of Mary Ball Washington?
  Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
  So the final answer is: Joseph Ball
  `,
  },
  {
    question:
      "Are both the directors of Jaws and Casino Royale from the same country?",
    answer: `
  Are follow up questions needed here: Yes.
  Follow up: Who is the director of Jaws?
  Intermediate Answer: The director of Jaws is Steven Spielberg.
  Follow up: Where is Steven Spielberg from?
  Intermediate Answer: The United States.
  Follow up: Who is the director of Casino Royale?
  Intermediate Answer: The director of Casino Royale is Martin Campbell.
  Follow up: Where is Martin Campbell from?
  Intermediate Answer: New Zealand.
  So the final answer is: No
  `,
  },
];
### Pass the examples and formatter to `FewShotPromptTemplate`
Finally, create a [`FewShotPromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) object. This object takes in the few-shot examples and the formatter for the few-shot examples. When this `FewShotPromptTemplate` is formatted, it formats the passed examples using the `examplePrompt`, and then adds them to the final prompt before `suffix`:
import { FewShotPromptTemplate } from "@langchain/core/prompts";

const prompt = new FewShotPromptTemplate({
  examples,
  examplePrompt,
  suffix: "Question: {input}",
  inputVariables: ["input"],
});

const formatted = await prompt.format({
  input: "Who was the father of Mary Ball Washington?",
});
console.log(formatted.toString());
Question: Who lived longer, Muhammad Ali or Alan Turing?

  Are follow up questions needed here: Yes.
  Follow up: How old was Muhammad Ali when he died?
  Intermediate answer: Muhammad Ali was 74 years old when he died.
  Follow up: How old was Alan Turing when he died?
  Intermediate answer: Alan Turing was 41 years old when he died.
  So the final answer is: Muhammad Ali

Question: When was the founder of craigslist born?

  Are follow up questions needed here: Yes.
  Follow up: Who was the founder of craigslist?
  Intermediate answer: Craigslist was founded by Craig Newmark.
  Follow up: When was Craig Newmark born?
  Intermediate answer: Craig Newmark was born on December 6, 1952.
  So the final answer is: December 6, 1952

Question: Who was the maternal grandfather of George Washington?

  Are follow up questions needed here: Yes.
  Follow up: Who was the mother of George Washington?
  Intermediate answer: The mother of George Washington was Mary Ball Washington.
  Follow up: Who was the father of Mary Ball Washington?
  Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
  So the final answer is: Joseph Ball

Question: Are both the directors of Jaws and Casino Royale from the same country?

  Are follow up questions needed here: Yes.
  Follow up: Who is the director of Jaws?
  Intermediate Answer: The director of Jaws is Steven Spielberg.
  Follow up: Where is Steven Spielberg from?
  Intermediate Answer: The United States.
  Follow up: Who is the director of Casino Royale?
  Intermediate Answer: The director of Casino Royale is Martin Campbell.
  Follow up: Where is Martin Campbell from?
  Intermediate Answer: New Zealand.
  So the final answer is: No

Question: Who was the father of Mary Ball Washington?
By providing the model with examples like this, we can guide the model to a better response.
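To actually generate with this prompt, you can pipe it into a model like any other prompt template. The following is a minimal sketch rather than part of the original guide; it assumes `@langchain/openai` is installed and `OPENAI_API_KEY` is set:

import { ChatOpenAI } from "@langchain/openai";

// The few-shot prompt template is a runnable, so it can be chained
// directly into a model. The specific model used here is an assumption.
const model = new ChatOpenAI({ temperature: 0 });
const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "Who was the father of Mary Ball Washington?",
});
console.log(response.content);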
Using an example selector
---------------------------------------------------------------------------------------------------
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the `FewShotPromptTemplate` object, we will feed them into an instance of [`SemanticSimilarityExampleSelector`](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), an implementation of `ExampleSelector`. This class selects few-shot examples from the initial set based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.
To show what it looks like, let’s initialize an instance and call it in isolation:
Set your OpenAI API key for the embeddings model
export OPENAI_API_KEY="..."
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  // This is the list of examples available to select from.
  examples,
  // This is the embedding class used to produce embeddings which are used to measure semantic similarity.
  new OpenAIEmbeddings(),
  // This is the VectorStore class that is used to store the embeddings and do a similarity search over.
  MemoryVectorStore,
  {
    // This is the number of examples to produce.
    k: 1,
  }
);

// Select the most similar example to the input.
const question = "Who was the father of Mary Ball Washington?";
const selectedExamples = await exampleSelector.selectExamples({ question });
console.log(`Examples most similar to the input: ${question}`);
for (const example of selectedExamples) {
  console.log("\n");
  console.log(
    Object.entries(example)
      .map(([k, v]) => `${k}: ${v}`)
      .join("\n")
  );
}
Examples most similar to the input: Who was the father of Mary Ball Washington?

question: Who was the maternal grandfather of George Washington?
answer:
  Are follow up questions needed here: Yes.
  Follow up: Who was the mother of George Washington?
  Intermediate answer: The mother of George Washington was Mary Ball Washington.
  Follow up: Who was the father of Mary Ball Washington?
  Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
  So the final answer is: Joseph Ball
Now, let’s create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter prompt for the few-shot examples.
const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  suffix: "Question: {input}",
  inputVariables: ["input"],
});

const formatted = await prompt.invoke({
  input: "Who was the father of Mary Ball Washington?",
});
console.log(formatted.toString());
Question: Who was the maternal grandfather of George Washington?

  Are follow up questions needed here: Yes.
  Follow up: Who was the mother of George Washington?
  Intermediate answer: The mother of George Washington was Mary Ball Washington.
  Follow up: Who was the father of Mary Ball Washington?
  Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
  So the final answer is: Joseph Ball

Question: Who was the father of Mary Ball Washington?
Next steps
------------------------------------------------------
You’ve now learned how to add few-shot examples to your prompts.
Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with chat models](/v0.2/docs/how_to/few_shot_examples_chat), or the other [example selector how-to guides](/v0.2/docs/how_to/example_selectors/).
How to return structured data from a model
==========================================
It is often useful to have a model return output that matches some specific schema. One common use-case is extracting data from arbitrary text to insert into a traditional database or use with some other downstream system. This guide will show you a few different strategies you can use to do this.
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
The `.withStructuredOutput()` method
----------------------------------------------------------------------------------------------------------------------------
There are several strategies that models can use under the hood. For some of the most popular model providers, including [Anthropic](/v0.2/docs/integrations/platforms/anthropic/), [Google VertexAI](/v0.2/docs/integrations/platforms/google/), [Mistral](/v0.2/docs/integrations/chat/mistral/), and [OpenAI](/v0.2/docs/integrations/platforms/openai/), LangChain implements a common interface that abstracts away these strategies, called `.withStructuredOutput`.
By invoking this method (and passing in a [JSON schema](https://json-schema.org/) or a [Zod schema](https://zod.dev/)), the model will add whatever model parameters and output parsers are necessary to get back structured output matching the requested schema. If the model supports more than one way to do this (e.g., function calling vs. JSON mode), you can configure which method to use by passing an option into that method.
Let’s look at some examples of this in action! We’ll use Zod to create a simple response schema.
### Pick your chat model:
* OpenAI
* Anthropic
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
import { z } from "zod";

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline to the joke"),
  rating: z.number().optional().describe("How funny the joke is, from 1 to 10"),
});

const structuredLlm = model.withStructuredOutput(Joke);

await structuredLlm.invoke("Tell me a joke about cats");
{
  setup: "Why was the cat sitting on the computer?",
  punchline: "Because it wanted to keep an eye on the mouse!",
  rating: 8
}
The result is a JSON object.
You can also pass in an OpenAI-style JSON schema object if you prefer not to use Zod. This object should contain three properties:
* `name`: The name of the schema to output.
* `description`: A high-level description of the schema to output.
* `parameters`: The nested details of the schema you want to extract, formatted as a [JSON schema](https://json-schema.org/) object.
In this case, the response is also a plain object:
const structuredLlm = model.withStructuredOutput({
  name: "joke",
  description: "Joke to tell user.",
  parameters: {
    title: "Joke",
    type: "object",
    properties: {
      setup: { type: "string", description: "The setup for the joke" },
      punchline: { type: "string", description: "The joke's punchline" },
    },
    required: ["setup", "punchline"],
  },
});

await structuredLlm.invoke("Tell me a joke about cats");
{
  setup: "Why was the cat sitting on the computer?",
  punchline: "Because it wanted to keep an eye on the mouse!"
}
If you are using JSON Schema, you can take advantage of other, more complex schema features to create a similar effect, as sketched below.
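For instance, here is a hypothetical sketch (not from the guide above) that uses JSON Schema keywords such as `enum` and numeric bounds to further constrain the output; the schema and all names in it are illustrative only:

const constrainedLlm = model.withStructuredOutput({
  name: "movie_review",
  description: "A short structured review of a movie.",
  parameters: {
    title: "MovieReview",
    type: "object",
    properties: {
      title: { type: "string", description: "The movie's title" },
      // `enum` restricts the model to a fixed set of values.
      sentiment: {
        type: "string",
        enum: ["positive", "negative", "mixed"],
        description: "Overall sentiment of the review",
      },
      // Numeric bounds communicate the allowed range to the model.
      stars: {
        type: "number",
        minimum: 1,
        maximum: 5,
        description: "Star rating from 1 to 5",
      },
    },
    required: ["title", "sentiment"],
  },
});

await constrainedLlm.invoke("Give me a one-sentence review of the movie Jaws.");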
You can also use tool calling directly to allow the model to choose between options, if your chosen model supports it. This involves a bit more parsing and setup. See [this how-to guide](/v0.2/docs/how_to/tool_calling/) for more details.
### Specifying the output method (Advanced)
For models that support more than one means of outputting data, you can specify the preferred one like this:
```typescript
const structuredLlm = model.withStructuredOutput(Joke, {
  method: "json_mode",
});

await structuredLlm.invoke(
  "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
);
```

```
{
  setup: "Why was the cat sitting on the computer?",
  punchline: "To keep an eye on the mouse!"
}
```
In the above example, we use OpenAI’s alternate JSON mode capability along with a more specific prompt.
For specifics about the model you choose, peruse its entry in the [API reference pages](https://v02.api.js.langchain.com/).
Prompting techniques
--------------------

You can also prompt models to output information in a given format. This approach relies on designing good prompts and then parsing the output of the models. This is the only option for models that don’t support `.withStructuredOutput()` or other built-in approaches.
### Using `JsonOutputParser`

The following example uses the built-in [`JsonOutputParser`](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) to parse the output of a chat model prompted to match the given JSON schema. Note that we add `format_instructions` to the prompt directly as a partial variable:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

type Person = {
  name: string;
  height_in_meters: number;
};

type People = {
  people: Person[];
};

const formatInstructions = `Respond only in valid JSON. The JSON object you return should match the following schema:
{{ people: [{{ name: "string", height_in_meters: "number" }}] }}

Where people is an array of objects, each with a name and height_in_meters field.`;

// Set up a parser
const parser = new JsonOutputParser<People>();

// Prompt
const prompt = await ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
  ],
  ["human", "{query}"],
]).partial({
  format_instructions: formatInstructions,
});
```
Let’s take a look at what information is sent to the model:
```typescript
const query = "Anna is 23 years old and she is 6 feet tall";

console.log((await prompt.format({ query })).toString());
```

```
System: Answer the user query. Wrap the output in `json` tags
Respond only in valid JSON. The JSON object you return should match the following schema:
{{ people: [{{ name: "string", height_in_meters: "number" }}] }}

Where people is an array of objects, each with a name and height_in_meters field.
Human: Anna is 23 years old and she is 6 feet tall
```

And now let’s invoke it:

```typescript
const chain = prompt.pipe(model).pipe(parser);

await chain.invoke({ query });
```

```
{ people: [ { name: "Anna", height_in_meters: 1.83 } ] }
```
For a deeper dive into using output parsers with prompting techniques for structured output, see [this guide](/v0.2/docs/how_to/output_parser_structured).
### Custom Parsing

You can also create a custom prompt and parser with [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language), using a plain function to parse the output from the model:

````typescript
import { AIMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";

type Person = {
  name: string;
  height_in_meters: number;
};

type People = {
  people: Person[];
};

const schema = `{{ people: [{{ name: "string", height_in_meters: "number" }}] }}`;

// Prompt
const prompt = await ChatPromptTemplate.fromMessages([
  [
    "system",
    `Answer the user query. Output your answer as JSON that
matches the given schema: \`\`\`json\n{schema}\n\`\`\`.
Make sure to wrap the answer in \`\`\`json and \`\`\` tags`,
  ],
  ["human", "{query}"],
]).partial({
  schema,
});

/**
 * Custom extractor
 *
 * Extracts JSON content from a string where
 * JSON is embedded between ```json and ``` tags.
 */
const extractJson = (output: AIMessage): Array<People> => {
  const text = output.content as string;
  // Define the regular expression pattern to match JSON blocks
  const pattern = /```json(.*?)```/gs;
  // Find all non-overlapping matches of the pattern in the string
  const matches = text.match(pattern);
  // Process each match, attempting to parse it as JSON
  try {
    return (
      matches?.map((match) => {
        // Remove the markdown code block syntax to isolate the JSON string
        const jsonStr = match.replace(/```json|```/g, "").trim();
        return JSON.parse(jsonStr);
      }) ?? []
    );
  } catch (error) {
    throw new Error(`Failed to parse: ${output}`);
  }
};
````
Here is the prompt sent to the model:
```typescript
const query = "Anna is 23 years old and she is 6 feet tall";

console.log((await prompt.format({ query })).toString());
```

````
System: Answer the user query. Output your answer as JSON that
matches the given schema: ```json
{{ people: [{{ name: "string", height_in_meters: "number" }}] }}
```.
Make sure to wrap the answer in ```json and ``` tags
Human: Anna is 23 years old and she is 6 feet tall
````

And here’s what it looks like when we invoke it:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const chain = prompt
  .pipe(model)
  .pipe(new RunnableLambda({ func: extractJson }));

await chain.invoke({ query });
```

```
[ { people: [ { name: "Anna", height_in_meters: 1.83 } ] } ]
```
Next steps
----------
Now you’ve learned a few methods to make a model output structured data.
To learn more, check out the other how-to guides in this section, or the conceptual guide on tool calling.
How to track token usage
========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LLMs](/v0.2/docs/concepts/#llms)
This notebook goes over how to track your token usage for specific LLM calls. This is only implemented by some providers, including OpenAI.
Here's an example of tracking token usage for a single LLM call via a callback:
tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");

/*
  {
    "generations": [
      [
        {
          "text": "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything.",
          "generationInfo": {
            "finishReason": "stop",
            "logprobs": null
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 14,
        "promptTokens": 5,
        "totalTokens": 19
      }
    }
  }
*/
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
If this model is passed to a chain or agent that calls it multiple times, it will log an output each time.
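For example, here is a minimal sketch of aggregating usage across several calls with a shared callback handler. It assumes the provider populates `llmOutput.tokenUsage` in its results, as OpenAI does above:

```typescript
import { OpenAI } from "@langchain/openai";

let totalTokens = 0;

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        // Providers that don't report usage will leave llmOutput undefined.
        totalTokens += output.llmOutput?.tokenUsage?.totalTokens ?? 0;
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");
await llm.invoke("Tell me another joke.");

console.log(`Total tokens used across calls: ${totalTokens}`);
```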
Next steps
----------
You've now seen how to get token usage for supported LLM providers.
Next, check out the other how-to guides in this section, like [how to implement your own custom LLM](/v0.2/docs/how_to/custom_llm).
How to do per-user retrieval
============================
Prerequisites
This guide assumes familiarity with the following:
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/)
When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see each other’s data. You therefore need to be able to configure your retrieval chain to only retrieve certain information. This generally involves three steps.

**Step 1: Make sure the retriever you are using supports multiple users**

At the moment, there is no unified flag or filter for this in LangChain. Rather, each vectorstore and retriever may have its own, and these may be called different things (namespaces, multi-tenancy, etc.). For vectorstores, this is generally exposed as an argument that is passed in during `similaritySearch`. By reading the documentation or source code, figure out whether the retriever you are using supports multiple users, and, if so, how to use it.

**Step 2: Add that parameter as a configurable field for the chain**

The LangChain `config` object is passed through to every Runnable. Here you can add any fields you’d like to the `configurable` object. Later, inside the chain, we can extract these fields.

**Step 3: Call the chain with that configurable field**

At runtime, you can now call the chain with the appropriate configurable field.
Code Example
------------
Let’s see a concrete example of what this looks like in code. We will use Pinecone for this example.
Setup
-----

### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core
# or
yarn add @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core
# or
pnpm add @langchain/pinecone @langchain/openai @pinecone-database/pinecone @langchain/core
```

### Set environment variables
We’ll use OpenAI and Pinecone in this example:
```bash
OPENAI_API_KEY=your-api-key
PINECONE_API_KEY=your-api-key
PINECONE_INDEX=your-index-name

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";

const embeddings = new OpenAIEmbeddings();

const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex,
});

await vectorStore.addDocuments(
  [new Document({ pageContent: "i worked at kensho" })],
  { namespace: "harrison" }
);

await vectorStore.addDocuments(
  [new Document({ pageContent: "i worked at facebook" })],
  { namespace: "ankush" }
);
```
[ "77b8f174-9d89-4c6c-b2ab-607fe3913b2d" ]
The pinecone kwarg for `namespace` can be used to separate documents
// This will only get documents for Ankushconst ankushRetriever = vectorStore.asRetriever({ filter: { namespace: "ankush", },});await ankushRetriever.invoke("where did i work?");
[ Document { pageContent: "i worked at facebook", metadata: {} } ]
// This will only get documents for Harrisonconst harrisonRetriever = vectorStore.asRetriever({ filter: { namespace: "harrison", },});await harrisonRetriever.invoke("where did i work?");
[ Document { pageContent: "i worked at kensho", metadata: {} } ]
We can now create the chain that we will use to perform question-answering.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
```

We can now create the chain using our configurable retriever. It is configurable because the retriever options are not hardcoded; instead, they are read at runtime from the `config` object passed to the chain. From there, we extract the `configurable` object and pass it to the vectorstore.

```typescript
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const chain = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: async (input, config) => {
      if (!config || !("configurable" in config)) {
        throw new Error("No config");
      }
      const { configurable } = config;
      const documents = await vectorStore
        .asRetriever(configurable)
        .invoke(input.question, config);
      return documents.map((doc) => doc.pageContent).join("\n\n");
    },
  }),
  prompt,
  model,
  new StringOutputParser(),
]);
```
We can now invoke the chain with configurable options. Here, `filter` is the field we read from the `configurable` object at runtime; its value holds the search options to pass to Pinecone.

```typescript
await chain.invoke(
  { question: "where did the user work?" },
  { configurable: { filter: { namespace: "harrison" } } }
);
```

```
"The user worked at Kensho."
```

```typescript
await chain.invoke(
  { question: "where did the user work?" },
  { configurable: { filter: { namespace: "ankush" } } }
);
```

```
"The user worked at Facebook."
```
For more vector store implementations that can support multiple users, please refer to specific pages, such as [Milvus](/v0.2/docs/integrations/vectorstores/milvus).
Next steps
----------
You’ve now seen one approach for supporting retrieval with data from multiple users.
Next, check out some of the other how-to guides on RAG, such as [returning sources](/v0.2/docs/how_to/qa_sources).
How to use output parsers to parse an LLM response into structured format
=========================================================================
Language models output text. But there are times where you want to get more structured information than just text back. While some model providers support [built-in ways to return structured output](/v0.2/docs/how_to/structured_output), not all do.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
* “Get format instructions”: A method which returns a string containing instructions for how the output of a language model should be formatted.
* “Parse”: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
* “Parse with prompt”: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
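To make the two required methods concrete, here is a minimal sketch of a custom parser; it is an illustrative example built on `BaseOutputParser` from `@langchain/core`, not one of the built-in parsers:

```typescript
import { BaseOutputParser } from "@langchain/core/output_parsers";

// A toy parser that structures model output as a list of strings.
class CommaSeparatedListOutputParser extends BaseOutputParser<string[]> {
  lc_namespace = ["custom", "output_parsers"];

  // "Get format instructions": tells the model how to format its output.
  getFormatInstructions(): string {
    return "Your response should be a list of comma separated values, e.g. `foo, bar, baz`";
  }

  // "Parse": turns the raw model text into a structured value.
  async parse(text: string): Promise<string[]> {
    return text.split(",").map((item) => item.trim());
  }
}

const listParser = new CommaSeparatedListOutputParser();

await listParser.invoke("red, green, blue");
// [ "red", "green", "blue" ]
```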
Get started
-----------

### LCEL
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI

#### Install dependencies (OpenAI)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

#### Add environment variables

```bash
OPENAI_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

#### Install dependencies (Anthropic)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```

#### Add environment variables

```bash
ANTHROPIC_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### Install dependencies (FireworksAI)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

#### Add environment variables

```bash
FIREWORKS_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### Install dependencies (MistralAI)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/mistralai
# or
yarn add @langchain/mistralai
# or
pnpm add @langchain/mistralai
```

#### Add environment variables

```bash
MISTRAL_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### Install dependencies (Groq)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```

#### Add environment variables

```bash
GROQ_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
```

#### Install dependencies (VertexAI)

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```

#### Add environment variables

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

#### Instantiate the model

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
```
```typescript
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  model,
  parser,
]);

console.log(parser.getFormatInstructions());
```

````
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```
````

```typescript
const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);
```

```
{ answer: "Paris", source: "https://en.wikipedia.org/wiki/Paris" }
```
Output parsers implement the [Runnable interface](/v0.2/docs/how_to/#langchain-expression-language-lcel), the basic building block of the [LangChain Expression Language (LCEL)](/v0.2/docs/how_to/#langchain-expression-language-lcel). This means they support `invoke`, `stream`, `batch`, `streamLog` calls.
Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type.
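For instance, the `parser` defined above can be invoked directly on a raw model response. A small sketch (the string argument here is a hand-written stand-in for model output):

````typescript
// StructuredOutputParser accepts a raw string and strips the enclosing
// markdown code block before parsing and validating the JSON.
const parsed = await parser.invoke(
  '```json\n{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n```'
);
// { answer: "Paris", source: "https://en.wikipedia.org/wiki/Paris" }
````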
While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.
For example, the chain above, which uses `StructuredOutputParser`, streams its fully parsed output in a single chunk:
```typescript
const stream = await chain.stream({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

for await (const s of stream) {
  console.log(s);
}
```

```
{
  answer: "The capital of France is Paris.",
  source: "https://en.wikipedia.org/wiki/Paris"
}
```
The `JsonOutputParser`, by contrast, can stream through partial outputs:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const jsonPrompt = PromptTemplate.fromTemplate(
  "Return a JSON object with an `answer` key that answers the following question: {question}"
);
const jsonParser = new JsonOutputParser();
const jsonChain = jsonPrompt.pipe(model).pipe(jsonParser);
```
```typescript
for await (const s of await jsonChain.stream({
  question: "Who invented the microscope?",
})) {
  console.log(s);
}
```

```
{}
{ answer: "" }
{ answer: "The" }
{ answer: "The microscope" }
{ answer: "The microscope was" }
{ answer: "The microscope was invented" }
{ answer: "The microscope was invented by" }
{ answer: "The microscope was invented by Zach" }
{ answer: "The microscope was invented by Zacharias" }
{ answer: "The microscope was invented by Zacharias J" }
{ answer: "The microscope was invented by Zacharias Jans" }
{ answer: "The microscope was invented by Zacharias Janssen" }
{ answer: "The microscope was invented by Zacharias Janssen and" }
{ answer: "The microscope was invented by Zacharias Janssen and his" }
{ answer: "The microscope was invented by Zacharias Janssen and his father" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th century" }
{ answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th century." }
```
* * *

https://js.langchain.com/v0.2/docs/how_to/chat_token_usage_tracking
How to track token usage
========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
This guide goes over how to track your token usage for specific calls.
Using `AIMessage.response_metadata`
-----------------------------------
A number of model providers return token usage information as part of the chat generation response. When available, this is included in the `AIMessage.response_metadata` field. Here's an example with OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
});

const res = await chatModel.invoke("Tell me a joke.");

console.log(res.response_metadata);

/*
  {
    tokenUsage: { completionTokens: 15, promptTokens: 12, totalTokens: 27 },
    finish_reason: 'stop'
  }
*/
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
And here's an example with Anthropic:
* npm
* Yarn
* pnpm
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const res = await chatModel.invoke("Tell me a joke.");

console.log(res.response_metadata);

/*
  {
    id: 'msg_017Mgz6HdgNbi3cwL1LNB9Dw',
    model: 'claude-3-sonnet-20240229',
    stop_sequence: null,
    usage: { input_tokens: 12, output_tokens: 30 },
    stop_reason: 'end_turn'
  }
*/
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
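Note that the two providers name their usage fields differently (`tokenUsage` vs. `usage`). If you want a single code path across providers, a small normalizing helper can smooth this over. Here's a minimal sketch (the `getTokenUsage` helper is our own, not a LangChain API, and assumes one of the two metadata shapes shown above):

import { AIMessage } from "@langchain/core/messages";

// Illustrative helper: normalize provider-specific usage metadata.
// Checks the OpenAI-style `tokenUsage` shape first, then the
// Anthropic-style `usage` shape. Returns undefined if neither is present.
function getTokenUsage(message: AIMessage) {
  const meta = message.response_metadata ?? {};
  if (meta.tokenUsage) {
    return {
      inputTokens: meta.tokenUsage.promptTokens,
      outputTokens: meta.tokenUsage.completionTokens,
    };
  }
  if (meta.usage) {
    return {
      inputTokens: meta.usage.input_tokens,
      outputTokens: meta.usage.output_tokens,
    };
  }
  return undefined;
}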
Using callbacks
---------------
You can also use the `handleLLMEnd` callback to get the full output from the LLM, including token usage for supported models. Here's an example of how you could do that:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await chatModel.invoke("Tell me a joke.");

/*
  {
    "generations": [
      [
        {
          "text": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!",
          "message": {
            "lc": 1,
            "type": "constructor",
            "id": ["langchain_core", "messages", "AIMessage"],
            "kwargs": {
              "content": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!",
              "tool_calls": [],
              "invalid_tool_calls": [],
              "additional_kwargs": {},
              "response_metadata": {
                "tokenUsage": {
                  "completionTokens": 17,
                  "promptTokens": 12,
                  "totalTokens": 29
                },
                "finish_reason": "stop"
              }
            }
          },
          "generationInfo": {
            "finish_reason": "stop"
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 17,
        "promptTokens": 12,
        "totalTokens": 29
      }
    }
  }
*/
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
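Because callbacks fire on every call, they are also a convenient place to aggregate usage across multiple invocations. Here's a minimal sketch (the running-total pattern is our own, and assumes the provider populates `llmOutput.tokenUsage` as shown above):

import { ChatOpenAI } from "@langchain/openai";

let totalTokens = 0;

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
  callbacks: [
    {
      // Accumulate reported usage after each call completes.
      handleLLMEnd(output) {
        totalTokens += output.llmOutput?.tokenUsage?.totalTokens ?? 0;
      },
    },
  ],
});

await chatModel.invoke("Tell me a joke.");
await chatModel.invoke("Tell me another joke.");

console.log(`Total tokens across calls: ${totalTokens}`);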
Next steps
----------
You've now seen a few examples of how to track chat model token usage for supported providers.
Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/v0.2/docs/how_to/structured_output) or [how to add caching to your chat models](/v0.2/docs/how_to/chat_model_caching).
* * *

https://js.langchain.com/v0.2/docs/how_to/custom_chat
How to create a custom chat model class
=======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
This guide goes over how to create a custom chat model wrapper, in case you want to use your own chat model or a wrapper that isn't directly supported in LangChain.
There are a few required things that a chat model needs to implement after extending the [`SimpleChatModel` class](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.SimpleChatModel.html):
* A `_call` method that takes in a list of messages and call options (which include things like `stop` sequences), and returns a string.
* A `_llmType` method that returns a string. Used for logging purposes only.
You can also implement the following optional method:
* A `_streamResponseChunks` method that returns an `AsyncGenerator` and yields [`ChatGenerationChunks`](https://v02.api.js.langchain.com/classes/langchain_core_outputs.ChatGenerationChunk.html). This allows the LLM to support streaming outputs.
Let's implement a very simple custom chat model that just echoes back the first `n` characters of the input.
import {
  SimpleChatModel,
  type BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { AIMessageChunk, type BaseMessage } from "@langchain/core/messages";
import { ChatGenerationChunk } from "@langchain/core/outputs";

export interface CustomChatModelInput extends BaseChatModelParams {
  n: number;
}

export class CustomChatModel extends SimpleChatModel {
  n: number;

  constructor(fields: CustomChatModelInput) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "custom";
  }

  async _call(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<string> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    return messages[0].content.slice(0, this.n);
  }

  async *_streamResponseChunks(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): AsyncGenerator<ChatGenerationChunk> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    for (const letter of messages[0].content.slice(0, this.n)) {
      yield new ChatGenerationChunk({
        message: new AIMessageChunk({
          content: letter,
        }),
        text: letter,
      });
      // Trigger the appropriate callback for new chunks
      await runManager?.handleLLMNewToken(letter);
    }
  }
}
We can now use this like any other chat model:
const chatModel = new CustomChatModel({ n: 4 });

await chatModel.invoke([["human", "I am an LLM"]]);
AIMessage { content: 'I am', additional_kwargs: {} }
And it supports streaming:
const stream = await chatModel.stream([["human", "I am an LLM"]]);

for await (const chunk of stream) {
  console.log(chunk);
}
AIMessageChunk { content: 'I', additional_kwargs: {} }
AIMessageChunk { content: ' ', additional_kwargs: {} }
AIMessageChunk { content: 'a', additional_kwargs: {} }
AIMessageChunk { content: 'm', additional_kwargs: {} }
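Other standard chat model methods, like `.batch()`, are inherited from the base class and work without any extra code. A quick sketch (with the `n: 4` model from above, we'd expect the first four characters of each input back):

// Batching is inherited from the base class, so it works out of the box:
await chatModel.batch([
  [["human", "I am an LLM"]],
  [["human", "Hello there"]],
]);

// Expected: two AIMessages with content "I am" and "Hell".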
Richer outputs
--------------
If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseChatModel`](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html) class and implement the lower level `_generate` method. It also takes a list of `BaseMessage`s as input, but requires you to construct and return a `ChatGeneration` object that permits additional metadata. Here's an example:
import { AIMessage, BaseMessage } from "@langchain/core/messages";
import { ChatResult } from "@langchain/core/outputs";
import {
  BaseChatModel,
  BaseChatModelCallOptions,
  BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

export interface AdvancedCustomChatModelOptions
  extends BaseChatModelCallOptions {}

export interface AdvancedCustomChatModelParams extends BaseChatModelParams {
  n: number;
}

export class AdvancedCustomChatModel extends BaseChatModel<AdvancedCustomChatModelOptions> {
  n: number;

  static lc_name(): string {
    return "AdvancedCustomChatModel";
  }

  constructor(fields: AdvancedCustomChatModelParams) {
    super(fields);
    this.n = fields.n;
  }

  async _generate(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<ChatResult> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    const content = messages[0].content.slice(0, this.n);
    const tokenUsage = {
      usedTokens: this.n,
    };
    return {
      generations: [{ message: new AIMessage({ content }), text: content }],
      llmOutput: { tokenUsage },
    };
  }

  _llmType(): string {
    return "advanced_custom_chat_model";
  }
}
This will pass the additionally returned information to callback events and to the `streamEvents` method:
const chatModel = new AdvancedCustomChatModel({ n: 4 });

const eventStream = await chatModel.streamEvents([["human", "I am an LLM"]], {
  version: "v1",
});

for await (const event of eventStream) {
  if (event.event === "on_llm_end") {
    console.log(JSON.stringify(event, null, 2));
  }
}
{ "event": "on_llm_end", "name": "AdvancedCustomChatModel", "run_id": "b500b98d-bee5-4805-9b92-532a491f5c70", "tags": [], "metadata": {}, "data": { "output": { "generations": [ [ { "message": { "lc": 1, "type": "constructor", "id": [ "langchain_core", "messages", "AIMessage" ], "kwargs": { "content": "I am", "additional_kwargs": {} } }, "text": "I am" } ] ], "llmOutput": { "tokenUsage": { "usedTokens": 4 } } } }}
Tracing (advanced)
------------------
If you are implementing a custom chat model and want to use it with a tracing service like [LangSmith](https://smith.langchain.com/), you can automatically log params used for a given invocation by implementing the `invocationParams()` method on the model.
This method is purely optional, but anything it returns will be logged as metadata for the trace.
Here's one pattern you might use:
export interface CustomChatModelOptions extends BaseChatModelCallOptions {
  // Some required or optional inner args
  tools: Record<string, any>[];
}

export interface CustomChatModelParams extends BaseChatModelParams {
  temperature: number;
}

export class CustomChatModel extends BaseChatModel<CustomChatModelOptions> {
  temperature: number;

  static lc_name(): string {
    return "CustomChatModel";
  }

  constructor(fields: CustomChatModelParams) {
    super(fields);
    this.temperature = fields.temperature;
  }

  // Anything returned in this method will be logged as metadata in the trace.
  // It is common to pass it any options used to invoke the function.
  invocationParams(options?: this["ParsedCallOptions"]) {
    return {
      tools: options?.tools,
      temperature: this.temperature,
    };
  }

  async _generate(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<ChatResult> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    const additionalParams = this.invocationParams(options);
    const content = await someAPIRequest(messages, additionalParams);
    return {
      generations: [{ message: new AIMessage({ content }), text: content }],
      llmOutput: {},
    };
  }

  _llmType(): string {
    return "custom_chat_model";
  }
}
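For illustration, an invocation might look like the following (`someAPIRequest` above is a placeholder, and the tool object below is hypothetical): any call option declared in `CustomChatModelOptions` can be passed per invocation, and whatever `invocationParams()` returns will appear as metadata on the trace.

const tracedModel = new CustomChatModel({ temperature: 0.7 });

// `tools` is declared on CustomChatModelOptions, so it can be passed per call.
// `invocationParams()` will report both it and the temperature to the tracer.
await tracedModel.invoke([["human", "Hi there"]], {
  tools: [{ name: "some_tool", description: "A hypothetical tool." }],
});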
* * *

https://js.langchain.com/v0.2/docs/how_to/tools_prompting
How to add ad-hoc tool calling capability to LLMs and Chat Models
=================================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Tool calling](/v0.2/docs/how_to/tool_calling/)
In this guide we’ll build a Chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/v0.2/docs/how_to/tool_calling)) and instead just prompts the model directly to invoke tools.
Setup
-----
We’ll need to install the following packages:
* npm
* yarn
* pnpm
npm i @langchain/core zod
yarn add @langchain/core zod
pnpm add @langchain/core zod
#### Set environment variables
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
Create a tool
-------------
First, we need to create a tool to call. For this example, we will create a custom tool from a function. For more information on all details related to creating custom tools, please see [this guide](/v0.2/docs/how_to/custom_tools).
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";

class Multiply extends StructuredTool {
  schema = z.object({
    first_int: z.number(),
    second_int: z.number(),
  });

  name = "multiply";

  description = "Multiply two integers together.";

  async _call(input: z.infer<typeof this.schema>) {
    return (input.first_int * input.second_int).toString();
  }
}

const multiply = new Multiply();
console.log(multiply.name);
console.log(multiply.description);
multiply
Multiply two integers together.
await multiply.invoke({ first_int: 4, second_int: 5 });
20
Creating our prompt
-------------------
We’ll want to write a prompt that specifies the tools the model has access to, the arguments to those tools, and the desired output format of the model. In this case we’ll instruct it to output a JSON blob of the form `{"name": "...", "arguments": {...}}`.
import { renderTextDescription } from "langchain/tools/render";

const renderedTools = renderTextDescription([multiply]);
import { ChatPromptTemplate } from "@langchain/core/prompts";

const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt],
  ["user", "{input}"],
]);
Adding an output parser
-----------------------
We’ll use the `JsonOutputParser` to parse our model’s output as JSON.
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0
});
import { JsonOutputParser } from "@langchain/core/output_parsers";

const chain = prompt.pipe(model).pipe(new JsonOutputParser());

await chain.invoke({
  input: "what's thirteen times 4",
  rendered_tools: renderedTools,
});
{ name: 'multiply', arguments: [ 13, 4 ] }
Invoking the tool
-----------------
We can invoke the tool as part of the chain by passing along the model-generated “arguments” to it:
import { RunnableLambda, RunnablePick } from "@langchain/core/runnables";

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(new RunnablePick("arguments"))
  .pipe(
    new RunnableLambda({
      func: (input) =>
        multiply.invoke({
          first_int: input[0],
          second_int: input[1],
        }),
    })
  );

await chain.invoke({
  input: "what's thirteen times 4",
  rendered_tools: renderedTools,
});
52
Choosing from multiple tools
----------------------------
Suppose we have multiple tools we want the chain to be able to choose from:
class Add extends StructuredTool {
  schema = z.object({
    first_int: z.number(),
    second_int: z.number(),
  });

  name = "add";

  description = "Add two integers together.";

  async _call(input: z.infer<typeof this.schema>) {
    return (input.first_int + input.second_int).toString();
  }
}

const add = new Add();

class Exponentiate extends StructuredTool {
  schema = z.object({
    first_int: z.number(),
    second_int: z.number(),
  });

  name = "exponentiate";

  description = "Exponentiate the base to the exponent power.";

  async _call(input: z.infer<typeof this.schema>) {
    return Math.pow(input.first_int, input.second_int).toString();
  }
}

const exponentiate = new Exponentiate();
If we want to run the model-selected tool, we can do so using a function that returns the tool based on the model output. Specifically, our function will return its own subchain that gets the “arguments” part of the model output and passes it to the chosen tool:
import { StructuredToolInterface } from "@langchain/core/tools";

const tools = [add, exponentiate, multiply];

const toolChain = (modelOutput) => {
  const toolMap: Record<string, StructuredToolInterface> = Object.fromEntries(
    tools.map((tool) => [tool.name, tool])
  );
  const chosenTool = toolMap[modelOutput.name];
  return new RunnablePick("arguments").pipe(
    new RunnableLambda({
      func: (input) =>
        chosenTool.invoke({
          first_int: input[0],
          second_int: input[1],
        }),
    })
  );
};

const toolChainRunnable = new RunnableLambda({
  func: toolChain,
});

const renderedTools = renderTextDescription(tools);

const systemPrompt = `You are an assistant that has access to the following set of tools. Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use. Return your response as a JSON blob with 'name' and 'arguments' keys.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPrompt],
  ["user", "{input}"],
]);

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(toolChainRunnable);

await chain.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});
1135
Returning tool inputs
---------------------
It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by `RunnablePassthrough.assign`-ing the tool output. This will take whatever the input is to the `RunnablePassthrough` component (assumed to be a dictionary) and add a key to it while still passing through everything that’s currently in the input:
import { RunnablePassthrough } from "@langchain/core/runnables";

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputParser())
  .pipe(RunnablePassthrough.assign({ output: toolChainRunnable }));

await chain.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});
{ name: 'add', arguments: [ 3, 1132 ], output: '1135' }
What’s next?
------------
This how-to guide shows the “happy path” when the model correctly outputs all the required tool information.
In reality, if you’re using more complex tools, you will start encountering errors from the model, especially with models that have not been fine-tuned for tool calling and with less capable models.
You will need to be prepared to add strategies to improve the output from the model; e.g.,
* Provide few shot examples.
* Add error handling (e.g., catch the exception and feed it back to the LLM to ask it to correct its previous output), as in the sketch below.
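Here is a rough sketch of that second strategy, reusing the chain from above. The single-retry policy and the wording of the corrective prompt are our own illustrative choices, not a built-in LangChain mechanism:

import { RunnableLambda } from "@langchain/core/runnables";

// Wrap the chain so that a parsing or tool-selection failure triggers one
// corrective retry, feeding the error text back to the model.
const chainWithRetry = new RunnableLambda({
  func: async (input: { input: string; rendered_tools: string }) => {
    try {
      return await chain.invoke(input);
    } catch (e: any) {
      return await chain.invoke({
        ...input,
        input: `${input.input}\n\nYour previous reply failed with: "${e.message}". Respond with only a valid JSON blob with 'name' and 'arguments' keys.`,
      });
    }
  },
});

await chainWithRetry.invoke({
  input: "what's 3 plus 1132",
  rendered_tools: renderedTools,
});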
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to return structured data from a model
](/v0.2/docs/how_to/structured_output)[
Next
How to create a custom chat model class
](/v0.2/docs/how_to/custom_chat)
* [Setup](#setup)
* [Create a tool](#create-a-tool)
* [Creating our prompt](#creating-our-prompt)
* [Adding an output parser](#adding-an-output-parser)
* [Invoking the tool](#invoking-the-tool)
* [Choosing from multiple tools](#choosing-from-multiple-tools)
* [Returning tool inputs](#returning-tool-inputs)
* [What’s next?](#whats-next)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/prompts_composition | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to compose prompts together
===============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
LangChain provides a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.
String prompt composition
-------------------------
When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  `Tell me a joke about {topic}, make it funny and in {language}`
);

prompt;
PromptTemplate { lc_serializable: true, lc_kwargs: { inputVariables: [ "topic", "language" ], templateFormat: "f-string", template: "Tell me a joke about {topic}, make it funny and in {language}" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "topic", "language" ], outputParser: undefined, partialVariables: undefined, templateFormat: "f-string", template: "Tell me a joke about {topic}, make it funny and in {language}", validateTemplate: true}
await prompt.format({ topic: "sports", language: "spanish" });
"Tell me a joke about sports, make it funny and in spanish"
Chat prompt composition
-----------------------
A chat prompt is made up of a list of messages. Similarly to the example above, we can concatenate chat prompt templates. Each new element is a new message in the final prompt.
First, let’s initialize a [`ChatPromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) with a [`SystemMessage`](https://v02.api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html).
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

const prompt = new SystemMessage("You are a nice pirate");
You can then easily create a pipeline combining it with other messages _or_ message templates. Use a `BaseMessage` when there are no variables to be formatted, and a `MessagePromptTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a [`HumanMessagePromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html)).
import { HumanMessagePromptTemplate } from "@langchain/core/prompts";

const newPrompt = HumanMessagePromptTemplate.fromTemplate([
  prompt,
  new HumanMessage("Hi"),
  new AIMessage("what?"),
  "{input}",
]);
Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!
await newPrompt.formatMessages({ input: "i said hi" });
[ HumanMessage { lc_serializable: true, lc_kwargs: { content: [ { type: "text", text: "You are a nice pirate" }, { type: "text", text: "Hi" }, { type: "text", text: "what?" }, { type: "text", text: "i said hi" } ], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: [ { type: "text", text: "You are a nice pirate" }, { type: "text", text: "Hi" }, { type: "text", text: "what?" }, { type: "text", text: "i said hi" } ], name: undefined, additional_kwargs: {}, response_metadata: {} }]
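Because the composed prompt behaves like any other `ChatPromptTemplate`, it is also a runnable, so you can pipe it directly into a chat model. A minimal sketch, assuming you have `@langchain/openai` installed and an `OPENAI_API_KEY` set:

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

// The prompt formats the messages, then the model generates a reply.
const chain = newPrompt.pipe(model);

await chain.invoke({ input: "i said hi" });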
Using PipelinePrompt
--------------------
LangChain includes a class called [`PipelinePromptTemplate`](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PipelinePromptTemplate.html), which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts:
* Final prompt: The final prompt that is returned
* Pipeline prompts: A list of objects, each consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
import {
  PromptTemplate,
  PipelinePromptTemplate,
} from "@langchain/core/prompts";

const fullPrompt = PromptTemplate.fromTemplate(`{introduction}

{example}

{start}`);

const introductionPrompt = PromptTemplate.fromTemplate(
  `You are impersonating {person}.`
);

const examplePrompt = PromptTemplate.fromTemplate(`Here's an example of an interaction:

Q: {example_q}
A: {example_a}`);

const startPrompt = PromptTemplate.fromTemplate(`Now, do this for real!

Q: {input}
A:`);

const composedPrompt = new PipelinePromptTemplate({
  pipelinePrompts: [
    {
      name: "introduction",
      prompt: introductionPrompt,
    },
    {
      name: "example",
      prompt: examplePrompt,
    },
    {
      name: "start",
      prompt: startPrompt,
    },
  ],
  finalPrompt: fullPrompt,
});
const formattedPrompt = await composedPrompt.format({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});

console.log(formattedPrompt);

You are impersonating Elon Musk.

Here's an example of an interaction:

Q: What's your favorite car?
A: Tesla

Now, do this for real!

Q: What's your favorite social media site?
A:
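Because each pipeline piece is a self-contained template, the same composed prompt can be formatted again with entirely different values - which is the point of the reuse pattern. A quick illustration (the values here are made up):

await composedPrompt.format({
  person: "a helpful librarian",
  example_q: `What's your favorite book?`,
  example_a: "The Hitchhiker's Guide to the Galaxy",
  input: `What's your favorite genre?`,
});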
Next steps
----------
You’ve now learned how to compose prompts together.
Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat).
https://js.langchain.com/v0.2/docs/how_to/passthrough
How to pass through arguments from one step to the next
=======================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Calling runnables in parallel](/v0.2/docs/how_to/parallel/)
* [Custom functions](/v0.2/docs/how_to/functions/)
When composing chains with several steps, sometimes you will want to pass data from previous steps unchanged for use as input to a later step. The [`RunnablePassthrough`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) class allows you to do just this, and is typically used in conjunction with a [RunnableParallel](/v0.2/docs/how_to/parallel/) to pass data through to a later step in your constructed chains.
Let’s look at an example:
import {
  RunnableParallel,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const runnable = RunnableParallel.from({
  passed: new RunnablePassthrough(),
  modified: (input) => input.num + 1,
});

await runnable.invoke({ num: 1 });
{ passed: { num: 1 }, modified: 2 }
As seen above, the `passed` key was called with `RunnablePassthrough()`, so it simply passed on `{ num: 1 }`.
We also set a second key in the map, `modified`. This uses a function to add 1 to `num`, which resulted in the `modified` key having a value of `2`.
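If you instead want the new keys merged into the original input as a single flat object (rather than nested under a `passed` key), `RunnablePassthrough.assign` is a convenient shorthand. A minimal sketch:

import { RunnablePassthrough } from "@langchain/core/runnables";

// assign() passes the input through and merges in the computed keys.
const runnableAssign = RunnablePassthrough.assign({
  modified: (input: { num: number }) => input.num + 1,
});

await runnableAssign.invoke({ num: 1 });
// { num: 1, modified: 2 }

See [How to add values to a chain's state](/v0.2/docs/how_to/assign) for more on this pattern.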
Retrieval Example
-----------------
In the example below, we see a more real-world use case where we use `RunnablePassthrough` along with `RunnableParallel` in a chain to properly format inputs to a prompt:
tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm: `npm i @langchain/openai`
* yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  [{ pageContent: "harrison worked at kensho", metadata: {} }],
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever();

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({ model: "gpt-4o" });

const retrievalChain = RunnableSequence.from([
  {
    context: retriever.pipe((docs) => docs[0].pageContent),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

await retrievalChain.invoke("where did harrison work?");
"Harrison worked at Kensho."
Here the input to `prompt` is expected to be a map with keys `"context"` and `"question"`. The user input is just the question, so we need to fetch the context using our retriever and pass through the user input under the `"question"` key. The `RunnablePassthrough` allows us to pass on the user’s question to the prompt and model.
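To make the shape of the data concrete, here is a minimal sketch that runs just the map step from the chain above on its own (reusing the `retriever` defined earlier):

import {
  RunnableParallel,
  RunnablePassthrough,
} from "@langchain/core/runnables";

// The map step produces exactly the object the prompt expects.
const mapStep = RunnableParallel.from({
  context: retriever.pipe((docs) => docs[0].pageContent),
  question: new RunnablePassthrough(),
});

await mapStep.invoke("where did harrison work?");
// { context: "harrison worked at kensho", question: "where did harrison work?" }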
Next steps
----------
Now you’ve learned how to pass data through your chains to help format the data flowing through them.
To learn more, see the other how-to guides on runnables in this section.
https://js.langchain.com/v0.2/docs/how_to/agent_executor
How to use legacy LangChain Agents (AgentExecutor)
==================================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Tools](/v0.2/docs/concepts#tools)
By themselves, language models can’t take actions - they just output text. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it determines whether more actions are needed, or whether it is okay to finish.
In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
info
This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph).
Concepts
--------

Concepts we will cover are:

* Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability
* Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent
* Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online
* [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to “remember” past interactions and take them into account when responding to followup questions
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
Setup
-----
### Jupyter Notebook
This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is using them as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc.), and going through guides in an interactive environment is a great way to better understand them.
This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.
### Installation
To install LangChain (and `cheerio` for the web loader) run:
* npm: `npm i langchain cheerio`
* yarn: `yarn add langchain cheerio`
* pnpm: `pnpm add langchain cheerio`
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
Define tools
------------
We first need to create the tools we want to use. We will use two tools: [Tavily](/v0.2/docs/integrations/tools/tavily_search) (to search online) and a retriever over a local index that we will create.
### [Tavily](/v0.2/docs/integrations/tools/tavily_search)
We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires an API key - they have a free tier, but if you don’t have one or don’t want to create one, you can always ignore this step.
Once you create your API key, you will need to export that as:
export TAVILY_API_KEY="..."
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const search = new TavilySearchResults({
  maxResults: 2,
});

await search.invoke("what is the weather in SF");
`[{"title":"Weather in San Francisco","url":"https://www.weatherapi.com/","content":"{'location': {'n`... 1111 more characters
### Retriever
We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/v0.2/docs/tutorials/rag).
import "cheerio"; // This is required in notebooks to use the `CheerioWebBaseLoader`import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings } from "@langchain/openai";import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";const loader = new CheerioWebBaseLoader( "https://docs.smith.langchain.com/overview");const docs = await loader.load();const documents = await new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,}).splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( documents, new OpenAIEmbeddings());const retriever = vectorStore.asRetriever();(await retriever.invoke("how to upload a dataset"))[0];
Document { pageContent: 'description="A sample dataset in LangSmith.")client.create_examples( inputs=[ {"postfix": '... 891 more characters, metadata: { source: "https://docs.smith.langchain.com/overview", loc: { lines: { from: 4, to: 4 } } }}
Now that we have populated the index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it).
import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = await createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
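As a quick sanity check, you can invoke the new tool directly, just as we invoked the search tool above. A minimal sketch - note that the `{ query: ... }` input shape is an assumption based on the schema `createRetrieverTool` sets up, so double-check it against the API reference:

// Hypothetical sanity check: call the retriever tool directly.
// Retriever tools created this way take an object with a `query`
// string (assumption - verify against the API docs).
await retrieverTool.invoke({ query: "how to upload a dataset" });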
### Tools
Now that we have created both, we can create a list of tools that we will use downstream.
const tools = [search, retrieverTool];
Using Language Models
---------------------
Next, let’s learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
### Pick your chat model:

#### OpenAI

Install dependencies (see [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages)):

* npm: `npm i @langchain/openai`
* yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`

Add environment variables:

OPENAI_API_KEY=your-api-key

Instantiate the model:

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });

#### Anthropic

Install dependencies:

* npm: `npm i @langchain/anthropic`
* yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`

Add environment variables:

ANTHROPIC_API_KEY=your-api-key

Instantiate the model:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

#### FireworksAI

Install dependencies:

* npm: `npm i @langchain/community`
* yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`

Add environment variables:

FIREWORKS_API_KEY=your-api-key

Instantiate the model:

import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});

#### MistralAI

Install dependencies:

* npm: `npm i @langchain/mistralai`
* yarn: `yarn add @langchain/mistralai`
* pnpm: `pnpm add @langchain/mistralai`

Add environment variables:

MISTRAL_API_KEY=your-api-key

Instantiate the model:

import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});

#### Groq

Install dependencies:

* npm: `npm i @langchain/groq`
* yarn: `yarn add @langchain/groq`
* pnpm: `pnpm add @langchain/groq`

Add environment variables:

GROQ_API_KEY=your-api-key

Instantiate the model:

import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});

#### VertexAI

Install dependencies:

* npm: `npm i @langchain/google-vertexai`
* yarn: `yarn add @langchain/google-vertexai`
* pnpm: `pnpm add @langchain/google-vertexai`

Add environment variables:

GOOGLE_APPLICATION_CREDENTIALS=credentials.json

Instantiate the model:

import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
You can call the language model by passing in a list of messages. By default, the response is a `content` string.
import { HumanMessage } from "@langchain/core/messages";

const response = await model.invoke([new HumanMessage("hi!")]);

response.content;
"Hello! How can I assist you today?"
We can now see what it is like to enable this model to do tool calling. In order to enable that, we use `.bindTools` to give the language model knowledge of these tools:
const modelWithTools = model.bindTools(tools);
We can now call the model. Let’s first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field.
const response = await modelWithTools.invoke([new HumanMessage("Hi!")]);

console.log(`Content: ${response.content}`);
console.log(`Tool calls: ${response.tool_calls}`);
Content: Hello! How can I assist you today?
Tool calls:
Now, let’s try calling it with some input that would expect a tool to be called.
const response = await modelWithTools.invoke([
  new HumanMessage("What's the weather in SF?"),
]);

console.log(`Content: ${response.content}`);
console.log(`Tool calls: ${JSON.stringify(response.tool_calls, null, 2)}`);
Content:
Tool calls: [
  {
    "name": "tavily_search_results_json",
    "args": {
      "input": "weather in San Francisco"
    },
    "id": "call_y0nn6mbVCV5paX6RrqqFUqdC"
  }
]
We can see that there’s now no content, but there is a tool call! It wants us to call the Tavily Search tool.
This isn’t calling that tool yet - it’s just telling us to. In order to actually call it, we’ll want to create our agent.
Create the agent
----------------
Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/v0.2/docs/concepts/#agent_types/).
We can first choose the prompt we want to use to guide the agent.
If you want to see the contents of this prompt in the hub, you can go to:
[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

// Get the prompt to use - you can modify this!
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

console.log(prompt.promptMessages);
[ SystemMessagePromptTemplate { lc_serializable: true, lc_kwargs: { prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { template: "You are a helpful assistant", inputVariables: [], templateFormat: "f-string", partialVariables: {} }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [], outputParser: undefined, partialVariables: {}, template: "You are a helpful assistant", templateFormat: "f-string", validateTemplate: true } }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], inputVariables: [], additionalOptions: {}, prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { template: "You are a helpful assistant", inputVariables: [], templateFormat: "f-string", partialVariables: {} }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [], outputParser: undefined, partialVariables: {}, template: "You are a helpful assistant", templateFormat: "f-string", validateTemplate: true }, messageClass: undefined, chatMessageClass: undefined }, MessagesPlaceholder { lc_serializable: true, lc_kwargs: { optional: true, variableName: "chat_history" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], variableName: "chat_history", optional: true }, HumanMessagePromptTemplate { lc_serializable: true, lc_kwargs: { prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { template: "{input}", inputVariables: [Array], templateFormat: "f-string", partialVariables: {} }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "input" ], outputParser: undefined, partialVariables: {}, template: "{input}", templateFormat: "f-string", validateTemplate: true } }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], inputVariables: [ "input" ], additionalOptions: {}, prompt: PromptTemplate { lc_serializable: true, lc_kwargs: { template: "{input}", inputVariables: [ "input" ], templateFormat: "f-string", partialVariables: {} }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "prompt" ], inputVariables: [ "input" ], outputParser: undefined, partialVariables: {}, template: "{input}", templateFormat: "f-string", validateTemplate: true }, messageClass: undefined, chatMessageClass: undefined }, MessagesPlaceholder { lc_serializable: true, lc_kwargs: { optional: false, variableName: "agent_scratchpad" }, lc_runnable: true, name: undefined, lc_namespace: [ "langchain_core", "prompts", "chat" ], variableName: "agent_scratchpad", optional: false }]
Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts/#agents).
Note that we are passing in the `model`, not `modelWithTools`. That is because `createToolCallingAgent` will call `.bindTools` for us under the hood.
import { createToolCallingAgent } from "langchain/agents";

const agent = await createToolCallingAgent({ llm: model, tools, prompt });
Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools).
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});
Run the agent
-------------
We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won’t remember previous interactions).
First up, let’s see how it responds when there’s no need to call a tool:
await agentExecutor.invoke({ input: "hi!" });
{ input: "hi!", output: "Hello! How can I assist you today?" }
In order to see exactly what is happening under the hood (and to make sure it’s not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/b8051e80-14fd-4931-be0f-6416280bc500/r)
Let’s now try it out on an example where it should be invoking the retriever.
await agentExecutor.invoke({ input: "how can langsmith help with testing?" });
{ input: "how can langsmith help with testing?", output: "LangSmith can help with testing by providing a platform for building production-grade LLM applicatio"... 880 more characters}
Let’s take a look at the [LangSmith trace](https://smith.langchain.com/public/35bd4f0f-aa2f-4ac2-b9a9-89ce0ca306ca/r) to make sure it’s actually calling that.
Now let’s try one where it needs to call the search tool:
await agentExecutor.invoke({ input: "whats the weather in sf?" });
{ input: "whats the weather in sf?", output: "The current weather in San Francisco is partly cloudy with a temperature of 64.0°F (17.8°C). The win"... 112 more characters}
We can check out the [LangSmith trace](https://smith.langchain.com/public/dfde6f46-0e7b-4dfe-813c-87d7bfb2ade5/r) to make sure it’s calling the search tool effectively.
Adding in memory
----------------
As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`.
**Note**: The input variable needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name.
// Here we pass in an empty list of messages for chat_history because it is the first message in the chat
await agentExecutor.invoke({ input: "hi! my name is bob", chat_history: [] });
{ input: "hi! my name is bob", chat_history: [], output: "Hello Bob! How can I assist you today?"}
import { AIMessage, HumanMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  chat_history: [
    new HumanMessage("hi! my name is bob"),
    new AIMessage("Hello Bob! How can I assist you today?"),
  ],
  input: "what's my name?",
});
{ chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! my name is bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! my name is bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Bob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], input: "what's my name?", output: "Your name is Bob! How can I help you, Bob?"}
If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory.
Because we have multiple inputs, we need to specify two things:
* `inputMessagesKey`: The input key to use to add to the conversation history.
* `historyMessagesKey`: The key to add the loaded messages into.
For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history).
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { BaseChatMessageHistory } from "@langchain/core/chat_history";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const store: Record<string, BaseChatMessageHistory> = {};

function getMessageHistory(sessionId: string): BaseChatMessageHistory {
  if (!(sessionId in store)) {
    store[sessionId] = new ChatMessageHistory();
  }
  return store[sessionId];
}

const agentWithChatHistory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

await agentWithChatHistory.invoke(
  { input: "hi! I'm bob" },
  { configurable: { sessionId: "<foo>" } }
);
{ input: "hi! I'm bob", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! I'm bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! I'm bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Bob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Hello Bob! How can I assist you today?"}
```typescript
await agentWithChatHistory.invoke(
  { input: "what's my name?" },
  { configurable: { sessionId: "<foo>" } }
);
```
{ input: "what's my name?", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "hi! I'm bob", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "hi! I'm bob", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Bob! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Bob! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "what's my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "what's my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Your name is Bob! How can I help you, Bob?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Your name is Bob! How can I help you, Bob?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Your name is Bob! How can I help you, Bob?"}
Example LangSmith trace: [https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r](https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r)
Conclusion
----------
That’s a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there’s a lot to learn!
info
This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out [LangGraph](/v0.2/docs/concepts/#langgraph).
* * *
https://js.langchain.com/v0.2/docs/how_to/assign
How to add values to a chain's state
====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Calling runnables in parallel](/v0.2/docs/how_to/parallel/)
* [Custom functions](/v0.2/docs/how_to/functions/)
* [Passing data through](/v0.2/docs/how_to/passthrough)
An alternate way of [passing data through](/v0.2/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html#assign-2) static method takes an input value and merges in the keys computed by the functions passed to `assign()`.
This is useful in the common [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.
Here’s an example:
```typescript
import {
  RunnableParallel,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const runnable = RunnableParallel.from({
  extra: RunnablePassthrough.assign({
    mult: (input: { num: number }) => input.num * 3,
    modified: (input: { num: number }) => input.num + 1,
  }),
});

await runnable.invoke({ num: 1 });
```
```
{ extra: { num: 1, mult: 3, modified: 2 } }
```
Let’s break down what’s happening here.
* The input to the chain is `{ num: 1 }`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input.
* The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input object (`{ num: 1 }`) and assigns two new keys: `mult`, whose value `(input) => input.num * 3` evaluates to `3`, and `modified`, whose value `(input) => input.num + 1` evaluates to `2`.
* The merged object `{ num: 1, mult: 3, modified: 2 }` is returned to the `RunnableParallel` call and set as the value of the `extra` key.

Thus, the result is `{ extra: { num: 1, mult: 3, modified: 2 } }`.
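Note that `RunnablePassthrough.assign()` also works on its own, outside of a `RunnableParallel`. Here’s a minimal sketch (an illustration, not part of the original example) showing the pass-through-and-merge behavior directly:

```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

// assign() returns a runnable that passes its input through unchanged
// and merges in the newly computed key(s).
const withMult = RunnablePassthrough.assign({
  mult: (input: { num: number }) => input.num * 3,
});

await withMult.invoke({ num: 1 });
// => { num: 1, mult: 3 }
```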
Streaming
---------
One convenient feature of this method is that it allows values to pass through as soon as they are available. To show this off, we’ll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  [{ pageContent: "harrison worked at kensho", metadata: {} }],
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever();

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({ model: "gpt-4o" });

const generationChain = prompt.pipe(model).pipe(new StringOutputParser());

const retrievalChain = RunnableSequence.from([
  {
    context: retriever.pipe((docs) => docs[0].pageContent),
    question: new RunnablePassthrough(),
  },
  RunnablePassthrough.assign({ output: generationChain }),
]);

const stream = await retrievalChain.stream("where did harrison work?");

for await (const chunk of stream) {
  console.log(chunk);
}
```
{ question: "where did harrison work?" }{ context: "harrison worked at kensho" }{ output: "" }{ output: "H" }{ output: "arrison" }{ output: " worked" }{ output: " at" }{ output: " Kens" }{ output: "ho" }{ output: "." }{ output: "" }
We can see that the first chunk contains the original `question`, since that is immediately available. The second chunk contains `context`, since the retriever finishes second. Finally, the output from `generationChain` streams in chunks as soon as it is available.
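If you want the fully assembled object rather than the incremental chunks, one option is to concatenate the streamed values yourself. Here’s a minimal sketch building on the example above (the accumulation logic is our own, not part of the original page):

```typescript
// Accumulate each streamed key into one final object.
const final: Record<string, string> = {};

const stream2 = await retrievalChain.stream("where did harrison work?");

for await (const chunk of stream2) {
  for (const [key, value] of Object.entries(chunk)) {
    final[key] = (final[key] ?? "") + value;
  }
}

console.log(final);
// => { question: "where did harrison work?", context: "harrison worked at kensho", output: "Harrison worked at Kensho." }
```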
Next steps
----------
Now you’ve learned how to pass data through your chains to help format the data flowing through them.
To learn more, see the other how-to guides on runnables in this section.
* * *
https://js.langchain.com/v0.2/docs/how_to/binding
How to attach runtime arguments to a Runnable
=============================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Tool calling](/v0.2/docs/how_to/tool_calling/)
Sometimes we want to invoke a [`Runnable`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) within a [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#bind) method to set these arguments ahead of time.
Binding stop sequences
----------------------
Suppose we have a simple prompt + model chain:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Write out the following equation using algebraic symbols then solve it. Use the format\n\nEQUATION:...\nSOLUTION:...\n\n",
  ],
  ["human", "{equation_statement}"],
]);

const model = new ChatOpenAI({ temperature: 0 });

const runnable = prompt.pipe(model).pipe(new StringOutputParser());

const res = await runnable.invoke({
  equation_statement: "x raised to the third plus seven equals 12",
});

console.log(res);
```
```
EQUATION: x^3 + 7 = 12

SOLUTION:
Subtract 7 from both sides:
x^3 = 5

Take the cube root of both sides:
x = ∛5
```
Now suppose we want to call the model with certain `stop` words so that the output is shortened, which is useful in certain prompting techniques. While we can pass some arguments into the constructor, other runtime args are set ahead of time with the `.bind()` method as follows:
```typescript
const runnable = prompt
  .pipe(model.bind({ stop: ["SOLUTION"] }))
  .pipe(new StringOutputParser());

const res = await runnable.invoke({
  equation_statement: "x raised to the third plus seven equals 12",
});

console.log(res);
```
```
EQUATION: x^3 + 7 = 12
```
What you can bind to a Runnable will depend on the extra parameters you can pass when invoking it.
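For example, `stop` is a call option on chat models, so binding it is equivalent to passing it at invocation time. A minimal sketch of the per-invocation form (an illustration, not from the original page):

```typescript
// Same effect as model.bind({ stop: [...] }): call options can also be
// passed as the second argument to .invoke() on the model itself.
const msg = await model.invoke(
  "Write out and solve: x raised to the third plus seven equals 12",
  { stop: ["SOLUTION"] }
);
```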
Attaching OpenAI tools
----------------------
Another common use-case is tool calling. While you should generally use the [`.bindTools()`](/v0.2/docs/how_to/tool_calling/) method for tool-calling models, you can also bind provider-specific args directly if you want lower-level control:
```typescript
const tools = [
  {
    type: "function",
    function: {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  },
];

const model = new ChatOpenAI({ model: "gpt-4o" }).bind({ tools });

await model.invoke("What's the weather in SF, NYC and LA?");
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    tool_calls: [
      { name: "get_current_weather", args: { location: "San Francisco, CA" }, id: "call_iDKz4zU8PKBaaIT052fJkMMF" },
      { name: "get_current_weather", args: { location: "New York, NY" }, id: "call_niQwZDOqO6OJTBiDBFG8FODc" },
      { name: "get_current_weather", args: { location: "Los Angeles, CA" }, id: "call_zLXH2cDVQy0nAVC0ViWuEP4m" }
    ],
    invalid_tool_calls: [],
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        { id: "call_iDKz4zU8PKBaaIT052fJkMMF", type: "function", function: [Object] },
        { id: "call_niQwZDOqO6OJTBiDBFG8FODc", type: "function", function: [Object] },
        { id: "call_zLXH2cDVQy0nAVC0ViWuEP4m", type: "function", function: [Object] }
      ]
    },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      { id: "call_iDKz4zU8PKBaaIT052fJkMMF", type: "function", function: { name: "get_current_weather", arguments: '{"location": "San Francisco, CA"}' } },
      { id: "call_niQwZDOqO6OJTBiDBFG8FODc", type: "function", function: { name: "get_current_weather", arguments: '{"location": "New York, NY"}' } },
      { id: "call_zLXH2cDVQy0nAVC0ViWuEP4m", type: "function", function: { name: "get_current_weather", arguments: '{"location": "Los Angeles, CA"}' } }
    ]
  },
  response_metadata: {
    tokenUsage: { completionTokens: 70, promptTokens: 82, totalTokens: 152 },
    finish_reason: "tool_calls"
  },
  tool_calls: [
    { name: "get_current_weather", args: { location: "San Francisco, CA" }, id: "call_iDKz4zU8PKBaaIT052fJkMMF" },
    { name: "get_current_weather", args: { location: "New York, NY" }, id: "call_niQwZDOqO6OJTBiDBFG8FODc" },
    { name: "get_current_weather", args: { location: "Los Angeles, CA" }, id: "call_zLXH2cDVQy0nAVC0ViWuEP4m" }
  ],
  invalid_tool_calls: []
}
```
Next steps
----------
You now know how to bind runtime arguments to a Runnable.
Next, you might be interested in our how-to guides on [passing data through a chain](/v0.2/docs/how_to/passthrough/).
* * *
https://js.langchain.com/v0.2/docs/how_to/chatbots_tools
How to use tools
================
Prerequisites
This guide assumes familiarity with the following:
* [Chatbots](/v0.2/docs/tutorials/chatbot)
* [Tools](/v0.2/docs/concepts#tools)
This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.
Setup
-----
For this guide, we’ll be using an OpenAI tools agent with a single tool for searching the web. The default will be powered by [Tavily](/v0.2/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you’re using Tavily.
You’ll need to [sign up for an account on the Tavily website](https://tavily.com), and install the following packages:
* npm
* yarn
* pnpm
npm i @langchain/core @langchain/openai langchain
yarn add @langchain/core @langchain/openai langchain
pnpm add @langchain/core @langchain/openai langchain
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";

const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});
```
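As mentioned above, Tavily is just the default; any similarly-shaped tool can take its place. Here’s a minimal sketch (a hypothetical stand-in tool, not part of the original guide) using `DynamicTool` from `@langchain/core/tools`:

```typescript
import { DynamicTool } from "@langchain/core/tools";

// A hypothetical drop-in replacement for TavilySearchResults: any tool
// with a name, description, and func can be passed to the agent instead.
const stubSearch = new DynamicTool({
  name: "web-search",
  description: "Searches the web and returns the top result for a query.",
  func: async (query: string) => `No results found for "${query}".`,
});

const altTools = [stubSearch];
```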
To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here’s an example:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  ["placeholder", "{messages}"],
  ["placeholder", "{agent_scratchpad}"],
]);
```
Great! Now let’s assemble our agent:
```typescript
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";

const agent = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({ agent, tools });
```
Running the agent
-----------------
Now that we’ve set up our agent, let’s try interacting with it! It can handle both trivial queries that require no lookup:
```typescript
import { HumanMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [new HumanMessage("I'm Nemo!")],
});
```
```
{
  messages: [
    HumanMessage { lc_serializable: true, lc_kwargs: { content: "I'm Nemo!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm Nemo!", name: undefined, additional_kwargs: {}, response_metadata: {} }
  ],
  output: "Hello Nemo! It's great to meet you. How can I assist you today?"
}
```
Or, it can use the passed search tool to get up-to-date information if needed:
```typescript
await agentExecutor.invoke({
  messages: [
    new HumanMessage(
      "What is the current conservation status of the Great Barrier Reef?"
    ),
  ],
});
```
```
{
  messages: [
    HumanMessage { lc_serializable: true, lc_kwargs: { content: "What is the current conservation status of the Great Barrier Reef?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What is the current conservation status of the Great Barrier Reef?", name: undefined, additional_kwargs: {}, response_metadata: {} }
  ],
  output: "The current conservation status of the Great Barrier Reef is a cause for concern. The International "... 801 more characters
}
```
Conversational responses
------------------------
Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:
```typescript
import { AIMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [
    new HumanMessage("I'm Nemo!"),
    new AIMessage("Hello Nemo! How can I assist you today?"),
    new HumanMessage("What is my name?"),
  ],
});
```
```
{
  messages: [
    HumanMessage { lc_serializable: true, lc_kwargs: { content: "I'm Nemo!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm Nemo!", name: undefined, additional_kwargs: {}, response_metadata: {} },
    AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Nemo! How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Nemo! How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] },
    HumanMessage { lc_serializable: true, lc_kwargs: { content: "What is my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What is my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }
  ],
  output: "Your name is Nemo!"
}
```
If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:
```typescript
// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt2 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent2 = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt: prompt2,
});

const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });
```
Then, because our agent executor has multiple outputs, we also have to set the `outputMessagesKey` property when initializing the wrapper:
```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

const conversationalAgentExecutor = new RunnableWithMessageHistory({
  runnable: agentExecutor2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  outputMessagesKey: "output",
  historyMessagesKey: "chat_history",
});
```
```typescript
await conversationalAgentExecutor.invoke(
  { input: "I'm Nemo!" },
  { configurable: { sessionId: "unused" } }
);
```
{ input: "I'm Nemo!", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "I'm Nemo!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm Nemo!", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Nemo! It's great to meet you. How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Nemo! It's great to meet you. How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Hello Nemo! It's great to meet you. How can I assist you today?"}
```typescript
await conversationalAgentExecutor.invoke(
  { input: "What is my name?" },
  { configurable: { sessionId: "unused" } }
);
```
{ input: "What is my name?", chat_history: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "I'm Nemo!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm Nemo!", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello Nemo! It's great to meet you. How can I assist you today?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello Nemo! It's great to meet you. How can I assist you today?", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "What is my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What is my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Your name is Nemo!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Your name is Nemo!", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] } ], output: "Your name is Nemo!"}
Next steps
----------
You’ve now learned how to create chatbots with tool-use capabilities.
For more, check out the other guides in this section, including [how to add history to your chatbots](/v0.2/docs/how_to/chatbots_memory).
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to do retrieval
](/v0.2/docs/how_to/chatbots_retrieval)[
Next
How to split code
](/v0.2/docs/how_to/code_splitter)
* [Setup](#setup)
* [Running the agent](#running-the-agent)
* [Conversational responses](#conversational-responses)
* [Next steps](#next-steps)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/character_text_splitter | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to split by character
=========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Text splitters](/v0.2/docs/concepts#text-splitters)
This is the simplest method for splitting text. It splits based on a given character sequence, which defaults to `"\n\n"`. Chunk length is measured by the number of characters.
1. How the text is split: by single character separator.
2. How the chunk size is measured: by number of characters.
To obtain the string content directly, use `.splitText()`.
To create LangChain [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) objects (e.g., for use in downstream tasks), use `.createDocuments()`.
import { CharacterTextSplitter } from "@langchain/textsplitters";
import * as fs from "node:fs";

// Load an example document
const rawData = fs.readFileSync(
  "../../../../examples/state_of_the_union.txt"
);
const stateOfTheUnion = rawData.toString();

const textSplitter = new CharacterTextSplitter({
  separator: "\n\n",
  chunkSize: 1000,
  chunkOverlap: 200,
});

const texts = await textSplitter.createDocuments([stateOfTheUnion]);
console.log(texts[0]);
Document { pageContent: "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and th"... 839 more characters, metadata: { loc: { lines: { from: 1, to: 17 } } }}
You can also propagate metadata associated with each document to the output chunks:
const metadatas = [{ document: 1 }, { document: 2 }];

const documents = await textSplitter.createDocuments(
  [stateOfTheUnion, stateOfTheUnion],
  metadatas
);
console.log(documents[0]);
Document { pageContent: "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and th"... 839 more characters, metadata: { document: 1, loc: { lines: { from: 1, to: 17 } } }}
To obtain the string content directly, use `.splitText()`:
const chunks = await textSplitter.splitText(stateOfTheUnion);

chunks[0];
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and th"... 839 more characters
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You’ve now learned a method for splitting text by character.
Next, check out a [more advanced way of splitting by character](/v0.2/docs/how_to/recursive_text_splitter), or the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/how_to/caching_embeddings
How to cache embedding results
==============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
Embeddings can be stored or temporarily cached to avoid needing to recompute them.
Caching embeddings can be done using a `CacheBackedEmbeddings` instance.
The cache-backed embedder is a wrapper around an embedder that caches embeddings in a key-value store.
The text is hashed and the hash is used as the key in the cache.
The main supported way to initialize a `CacheBackedEmbeddings` is the `fromBytesStore` static method. It takes the following parameters:
* `underlyingEmbeddings`: The embeddings model to use.
* `documentEmbeddingCache`: The cache to use for storing document embeddings.
* `namespace`: (optional, defaults to "") The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, you could set it to the name of the embedding model used.
**Attention:** Be sure to set the namespace parameter to avoid collisions of the same text embedded using different embeddings models.
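Putting those parameters together, here is a minimal initialization sketch (a condensed variation on the fuller examples below; the variable names are our own):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { InMemoryStore } from "@langchain/core/stores";

const underlying = new OpenAIEmbeddings();

const cachedEmbedder = CacheBackedEmbeddings.fromBytesStore(
  underlying,
  new InMemoryStore(),
  {
    // Namespacing by model name keeps embeddings of the same text produced
    // by different models from colliding: cache keys are the namespace
    // concatenated with the hash of the text.
    namespace: underlying.modelName,
  }
);

// Use it anywhere an embeddings model is expected:
const vectors = await cachedEmbedder.embedDocuments(["hello", "world"]);
```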
In-memory[](#in-memory "Direct link to In-memory")
---------------------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
Here's a basic test example with an in-memory cache. This type of cache is primarily useful for unit tests or prototyping. Do not use this cache if you need to actually store the embeddings for an extended period of time:
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { InMemoryStore } from "@langchain/core/stores";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { TextLoader } from "langchain/document_loaders/fs/text";

const underlyingEmbeddings = new OpenAIEmbeddings();

const inMemoryStore = new InMemoryStore();

const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
  underlyingEmbeddings,
  inMemoryStore,
  {
    namespace: underlyingEmbeddings.modelName,
  }
);

const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

// No keys logged yet since the cache is empty
for await (const key of inMemoryStore.yieldKeys()) {
  console.log(key);
}

let time = Date.now();
const vectorstore = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Initial creation time: ${Date.now() - time}ms`);
/*
  Initial creation time: 1905ms
*/

// The second time is much faster since the embeddings for the input docs
// have already been added to the cache
time = Date.now();
const vectorstore2 = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Cached creation time: ${Date.now() - time}ms`);
/*
  Cached creation time: 8ms
*/

// Many keys logged with hashed values
const keys = [];
for await (const key of inMemoryStore.yieldKeys()) {
  keys.push(key);
}
console.log(keys.slice(0, 5));
/*
  [
    'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64',
    'text-embedding-ada-0023b424f5ed1271a6f5601add17c1b58b7c992772e',
    'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111',
    'text-embedding-ada-00262f72e0c2d711c6b861714ee624b28af639fdb13',
    'text-embedding-ada-00262d58882330038a4e6e25ea69a938f4391541874'
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CacheBackedEmbeddings](https://v02.api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Redis[](#redis "Direct link to Redis")
---------------------------------------
Here's an example with a Redis cache.
You'll first need to install `ioredis` as a peer dependency and pass in an initialized client:
* npm
* Yarn
* pnpm
npm install ioredis
yarn add ioredis
pnpm add ioredis
import { Redis } from "ioredis";
import { OpenAIEmbeddings } from "@langchain/openai";
import { CacheBackedEmbeddings } from "langchain/embeddings/cache_backed";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RedisByteStore } from "@langchain/community/storage/ioredis";

const underlyingEmbeddings = new OpenAIEmbeddings();

// Requires a Redis instance running at redis://localhost:6379.
// See https://github.com/redis/ioredis for full config options.
const redisClient = new Redis();
const redisStore = new RedisByteStore({
  client: redisClient,
});

const cacheBackedEmbeddings = CacheBackedEmbeddings.fromBytesStore(
  underlyingEmbeddings,
  redisStore,
  {
    namespace: underlyingEmbeddings.modelName,
  }
);

const loader = new TextLoader("./state_of_the_union.txt");
const rawDocuments = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 0,
});
const documents = await splitter.splitDocuments(rawDocuments);

let time = Date.now();
const vectorstore = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Initial creation time: ${Date.now() - time}ms`);
/*
  Initial creation time: 1808ms
*/

// The second time is much faster since the embeddings for the input docs
// have already been added to the cache
time = Date.now();
const vectorstore2 = await FaissStore.fromDocuments(
  documents,
  cacheBackedEmbeddings
);
console.log(`Cached creation time: ${Date.now() - time}ms`);
/*
  Cached creation time: 33ms
*/

// Many keys logged with hashed values
const keys = [];
for await (const key of redisStore.yieldKeys()) {
  keys.push(key);
}
console.log(keys.slice(0, 5));
/*
  [
    'text-embedding-ada-002fa9ac80e1bf226b7b4dfc03ea743289a65a727b2',
    'text-embedding-ada-0027dbf9c4b36e12fe1768300f145f4640342daaf22',
    'text-embedding-ada-002ea9b59e760e64bec6ee9097b5a06b0d91cb3ab64',
    'text-embedding-ada-002fec5d021611e1527297c5e8f485876ea82dcb111',
    'text-embedding-ada-002c00f818c345da13fed9f2697b4b689338143c8c7'
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CacheBackedEmbeddings](https://v02.api.js.langchain.com/classes/langchain_embeddings_cache_backed.CacheBackedEmbeddings.html) from `langchain/embeddings/cache_backed`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [RedisByteStore](https://v02.api.js.langchain.com/classes/langchain_community_storage_ioredis.RedisByteStore.html) from `@langchain/community/storage/ioredis`
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to use caching to avoid recomputing embeddings.
Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/how_to/chatbots_memory
How to manage memory
====================
Prerequisites
This guide assumes familiarity with the following:
* [Chatbots](/v0.2/docs/tutorials/chatbot)
A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:
* Simply stuffing previous messages into a chat model prompt.
* The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.
* More complex modifications like synthesizing summaries for long running conversations.
We’ll go into more detail on a few techniques below!
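As a rough preview of the last technique, here is a minimal sketch of a summarization pass. This is our own illustration under assumed details (the `summarizeHistory` helper and the prompt wording are hypothetical), not the guide's implementation:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// Hypothetical helper: collapse everything stored in `history` into a
// single AI-generated summary message.
const summarizeHistory = async (
  history: ChatMessageHistory,
  model: ChatOpenAI
) => {
  const storedMessages = await history.getMessages();
  const summarizationPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    [
      "human",
      "Distill the above chat messages into a single summary message. Include as many specific details as you can.",
    ],
  ]);
  const summaryMessage = await summarizationPrompt
    .pipe(model)
    .invoke({ chat_history: storedMessages });
  // Replace the full transcript with the summary.
  await history.clear();
  await history.addMessage(summaryMessage);
};
```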
Setup[](#setup "Direct link to Setup")
---------------------------------------
You’ll need to install a few packages, and set any LLM API keys:
Let’s also set up a chat model that we’ll use for the below examples:
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
Message passing[](#message-passing "Direct link to Message passing")
---------------------------------------------------------------------
The simplest form of memory is simply passing chat history messages into a chain. Here’s an example:
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("messages"),
]);

const chain = prompt.pipe(model);

await chain.invoke({
  messages: [
    new HumanMessage(
      "Translate this sentence from English to French: I love programming."
    ),
    new AIMessage("J'adore la programmation."),
    new HumanMessage("What did you just say?"),
  ],
});
AIMessage { lc_serializable: true, lc_kwargs: { content: `I said "J'adore la programmation," which means "I love programming" in French.`, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: `I said "J'adore la programmation," which means "I love programming" in French.`, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 21, promptTokens: 61, totalTokens: 82 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages.
Chat history[](#chat-history "Direct link to Chat history")
------------------------------------------------------------
It’s perfectly fine to store and pass messages directly as an array, but we can also use LangChain’s built-in message history class to store and load messages. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers, but for this demo we will use an ephemeral, in-memory class.
Here’s an example of the API:
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage(
    "Translate this sentence from English to French: I love programming."
  )
);

await demoEphemeralChatMessageHistory.addMessage(
  new AIMessage("J'adore la programmation.")
);

await demoEphemeralChatMessageHistory.getMessages();
[ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Translate this sentence from English to French: I love programming.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Translate this sentence from English to French: I love programming.", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "J'adore la programmation.", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "J'adore la programmation.", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }]
We can use it directly to store conversation turns for our chain:
await demoEphemeralChatMessageHistory.clear();

const input1 =
  "Translate this sentence from English to French: I love programming.";

await demoEphemeralChatMessageHistory.addMessage(new HumanMessage(input1));

const response = await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});

await demoEphemeralChatMessageHistory.addMessage(response);

const input2 = "What did I just ask you?";

await demoEphemeralChatMessageHistory.addMessage(new HumanMessage(input2));

await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});
AIMessage { lc_serializable: true, lc_kwargs: { content: 'You just asked me to translate the sentence "I love programming" from English to French.', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'You just asked me to translate the sentence "I love programming" from English to French.', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 18, promptTokens: 73, totalTokens: 91 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
Automatic history management[](#automatic-history-management "Direct link to Automatic history management")
------------------------------------------------------------------------------------------------------------
The previous examples pass messages to the chain explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also includes a wrapper for LCEL chains, called `RunnableWithMessageHistory`, that can handle this process automatically.
To show how it works, let’s slightly modify the above prompt to take a final `input` variable that populates a `HumanMessage` template after the chat history. This means that we will expect a `chat_history` parameter that contains all messages BEFORE the current messages instead of all messages:
const runnableWithMessageHistoryPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

const chain2 = runnableWithMessageHistoryPrompt.pipe(model);
We’ll pass the latest input to the conversation here and let the `RunnableWithMessageHistory` class wrap our chain and do the work of appending that `input` variable to the chat history.
Next, let’s declare our wrapped chain:
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const demoEphemeralChatMessageHistoryForChain = new ChatMessageHistory();

const chainWithMessageHistory = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistoryForChain,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});
This class takes a few parameters in addition to the chain that we want to wrap:
* A factory function that returns a message history for a given session id. This allows your chain to handle multiple users at once by loading different messages for different conversations.
* An `inputMessagesKey` that specifies which part of the input should be tracked and stored in the chat history. In this example, we want to track the string passed in as input.
* A `historyMessagesKey` that specifies what the previous messages should be injected into the prompt as. Our prompt has a `MessagesPlaceholder` named `chat_history`, so we specify this property to match.
* (For chains with multiple outputs) an `outputMessagesKey` that specifies which output to store as history. This is the inverse of `inputMessagesKey`, and is sketched just below.
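Here is a minimal sketch of `outputMessagesKey` (our own illustration, not part of the original example), using a chain whose output is an object rather than a bare message:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Wrap the chain so its output is an object like `{ answer: AIMessage }`.
const namedOutputChain = chain2.pipe(
  RunnableLambda.from((message) => ({ answer: message }))
);

const namedOutputHistory = new ChatMessageHistory();

const chainWithNamedOutput = new RunnableWithMessageHistory({
  runnable: namedOutputChain,
  getMessageHistory: (_sessionId) => namedOutputHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
  // Store only the `answer` field of the output object as history:
  outputMessagesKey: "answer",
});
```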
We can invoke the wrapped `chainWithMessageHistory` as normal, with an additional `configurable` field that specifies the particular `sessionId` to pass to the factory function. The id is unused in this demo, but in real-world chains you'll want to return a chat history corresponding to the passed session (see the sketch after the two invocations below):
await chainWithMessageHistory.invoke(
  {
    input: "Translate this sentence from English to French: I love programming.",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: `The translation of "I love programming" in French is "J'adore la programmation."`, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: `The translation of "I love programming" in French is "J'adore la programmation."`, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 20, promptTokens: 39, totalTokens: 59 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
await chainWithMessageHistory.invoke(
  {
    input: "What did I just ask you?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: 'You just asked for the translation of the sentence "I love programming" from English to French.', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'You just asked for the translation of the sentence "I love programming" from English to French.', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 19, promptTokens: 74, totalTokens: 93 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
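As noted above, a real-world chain should return a distinct history per session. A minimal sketch of such a factory (our own illustration, backed by an in-memory `Map`; production code would use persistent storage):

```typescript
const sessionHistories = new Map<string, ChatMessageHistory>();

const chainWithPerSessionHistory = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (sessionId) => {
    // Create a fresh history the first time we see a session id,
    // then keep reusing it for that session.
    if (!sessionHistories.has(sessionId)) {
      sessionHistories.set(sessionId, new ChatMessageHistory());
    }
    return sessionHistories.get(sessionId)!;
  },
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

// Different session ids now get independent histories:
await chainWithPerSessionHistory.invoke(
  { input: "Hi, I'm Marlin." },
  { configurable: { sessionId: "session-1" } }
);
```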
Modifying chat history[](#modifying-chat-history "Direct link to Modifying chat history")
------------------------------------------------------------------------------------------
Modifying stored chat messages can help your chatbot handle a variety of situations. Here are some examples:
### Trimming messages[](#trimming-messages "Direct link to Trimming messages")
LLMs and chat models have limited context windows, and even if you’re not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let’s use an example history with some preloaded messages:
await demoEphemeralChatMessageHistory.clear();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("Hey there! I'm Nemo.")
);
await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Hello!"));
await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("How are you today?")
);
await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Fine thanks!"));

await demoEphemeralChatMessageHistory.getMessages();
[ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Hey there! I'm Nemo.", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hey there! I'm Nemo.", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Hello!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello!", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "How are you today?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "How are you today?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Fine thanks!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Fine thanks!", name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }]
Let’s use this message history with the `RunnableWithMessageHistory` chain we declared above:
const chainWithMessageHistory2 = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

await chainWithMessageHistory2.invoke(
  {
    input: "What's my name?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: "Your name is Nemo!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Your name is Nemo!", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 6, promptTokens: 66, totalTokens: 72 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
We can see the chain remembers the preloaded name.
But let’s say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages and re-add them to the history. We don’t have to, but let’s put this method at the front of our chain to ensure it’s always called:
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const trimMessages = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  if (storedMessages.length <= 2) {
    return false;
  }
  await demoEphemeralChatMessageHistory.clear();
  for (const message of storedMessages.slice(-2)) {
    await demoEphemeralChatMessageHistory.addMessage(message);
  }
  return true;
};

const chainWithTrimming = RunnableSequence.from([
  RunnablePassthrough.assign({ messages_trimmed: trimMessages }),
  chainWithMessageHistory2,
]);
Let’s call this new chain and check the messages afterwards:
await chainWithTrimming.invoke(
  {
    input: "Where does P. Sherman live?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 26, promptTokens: 53, totalTokens: 79 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
await demoEphemeralChatMessageHistory.getMessages();
[ HumanMessage { lc_serializable: true, lc_kwargs: { content: "What's my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What's my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Your name is Nemo!", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Your name is Nemo!", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 6, promptTokens: 66, totalTokens: 72 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Where does P. Sherman live?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Where does P. Sherman live?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 26, promptTokens: 53, totalTokens: 79 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }]
And we can see that our history has removed the two oldest messages while still adding the most recent conversation at the end. The next time the chain is called, `trimMessages` will be called again, and only the two most recent messages will be passed to the model. In this case, this means that the model will forget the name we gave it the next time we invoke it:
await chainWithTrimming.invoke(
  {
    input: "What is my name?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: "I'm sorry, I don't have access to your personal information. Can I help you with anything else?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm sorry, I don't have access to your personal information. Can I help you with anything else?", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 22, promptTokens: 73, totalTokens: 95 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
await demoEphemeralChatMessageHistory.getMessages();
[ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Where does P. Sherman live?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Where does P. Sherman live?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'P. Sherman is a fictional character who lives at 42 Wallaby Way, Sydney, from the movie "Finding Nem'... 3 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 26, promptTokens: 53, totalTokens: 79 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "What is my name?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What is my name?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "I'm sorry, I don't have access to your personal information. Can I help you with anything else?", tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "I'm sorry, I don't have access to your personal information. Can I help you with anything else?", name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 22, promptTokens: 73, totalTokens: 95 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }]
### Summary memory[](#summary-memory "Direct link to Summary memory")
We can use this same pattern in other ways too. For example, we could use an additional LLM call to generate a summary of the conversation before calling our chain. Let’s recreate our chat history and chatbot chain:
await demoEphemeralChatMessageHistory.clear();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("Hey there! I'm Nemo.")
);
await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Hello!"));
await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("How are you today?")
);
await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Fine thanks!"));
const runnableWithSummaryMemoryPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

const summaryMemoryChain = runnableWithSummaryMemoryPrompt.pipe(llm);

const chainWithMessageHistory3 = new RunnableWithMessageHistory({
  runnable: summaryMemoryChain,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});
And now, let’s create a function that will distill previous interactions into a summary. We can add this one to the front of the chain too:
const summarizeMessages = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  if (storedMessages.length === 0) {
    return false;
  }
  const summarizationPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    [
      "user",
      "Distill the above chat messages into a single summary message. Include as many specific details as you can.",
    ],
  ]);
  const summarizationChain = summarizationPrompt.pipe(llm);
  const summaryMessage = await summarizationChain.invoke({
    chat_history: storedMessages,
  });
  await demoEphemeralChatMessageHistory.clear();
  await demoEphemeralChatMessageHistory.addMessage(summaryMessage);
  return true;
};

const chainWithSummarization = RunnableSequence.from([
  RunnablePassthrough.assign({
    messages_summarized: summarizeMessages,
  }),
  chainWithMessageHistory3,
]);
Let’s see if it remembers the name we gave it:
await chainWithSummarization.invoke(
  {
    input: "What did I say my name was?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { lc_serializable: true, lc_kwargs: { content: 'You introduced yourself as "Nemo."', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'You introduced yourself as "Nemo."', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 8, promptTokens: 87, totalTokens: 95 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
await demoEphemeralChatMessageHistory.getMessages();
[ AIMessage { lc_serializable: true, lc_kwargs: { content: "The conversation consists of a greeting from someone named Nemo and a general inquiry about their we"... 86 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "The conversation consists of a greeting from someone named Nemo and a general inquiry about their we"... 86 more characters, name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 34, promptTokens: 62, totalTokens: 96 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "What did I say my name was?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "What did I say my name was?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: 'You introduced yourself as "Nemo."', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: 'You introduced yourself as "Nemo."', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 8, promptTokens: 87, totalTokens: 95 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: [] }]
Note that invoking the chain again will generate a new summary from the previous summary plus any new messages, and so on. You could also design a hybrid approach, where a certain number of recent messages are retained verbatim in the chat history while older ones are summarized.
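Here’s a minimal sketch of what such a hybrid approach could look like, reusing the `demoEphemeralChatMessageHistory`, `llm`, `ChatPromptTemplate`, and `MessagesPlaceholder` from above. The `hybridTrim` helper and the cutoff of four retained messages are illustrative choices of ours, not part of the original guide:

```typescript
// Hypothetical hybrid approach: keep the most recent messages verbatim
// and fold anything older into a single summary message.
const KEEP_LAST_N = 4;

const hybridTrim = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  if (storedMessages.length <= KEEP_LAST_N) {
    return false;
  }
  const olderMessages = storedMessages.slice(0, -KEEP_LAST_N);
  const recentMessages = storedMessages.slice(-KEEP_LAST_N);
  const summarizationPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    [
      "user",
      "Distill the above chat messages into a single summary message. Include as many specific details as you can.",
    ],
  ]);
  // Summarize only the older messages...
  const summaryMessage = await summarizationPrompt.pipe(llm).invoke({
    chat_history: olderMessages,
  });
  // ...then rebuild the history as [summary, ...recent messages].
  await demoEphemeralChatMessageHistory.clear();
  await demoEphemeralChatMessageHistory.addMessage(summaryMessage);
  for (const message of recentMessages) {
    await demoEphemeralChatMessageHistory.addMessage(message);
  }
  return true;
};
```

Like `summarizeMessages` above, `hybridTrim` could then be placed at the front of a `RunnableSequence` so it runs before every model call.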
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You’ve now learned how to manage memory in your chatbots.
Next, check out some of the other guides in this section, such as [how to add retrieval to your chatbot](/v0.2/docs/how_to/chatbots_retrieval).
https://js.langchain.com/v0.2/docs/how_to/chatbots_retrieval
How to do retrieval
===================
Prerequisites
This guide assumes familiarity with the following:
* [Chatbots](/v0.2/docs/tutorials/chatbot)
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag)
Retrieval is a common technique chatbots use to augment their responses with data outside a chat model’s training data. This section will cover how to implement retrieval in the context of chatbots, but it’s worth noting that retrieval is a very subtle and deep topic.
Setup[](#setup "Direct link to Setup")
---------------------------------------
You’ll need to install a few packages and set any required LLM API keys:
* npm
* yarn
* pnpm
npm i @langchain/openai cheerio
yarn add @langchain/openai cheerio
pnpm add @langchain/openai cheerio
Let’s also set up a chat model that we’ll use for the below examples.
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

#### Add environment variables

OPENAI_API_KEY=your-api-key

#### Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/anthropic

yarn add @langchain/anthropic

pnpm add @langchain/anthropic

#### Add environment variables

ANTHROPIC_API_KEY=your-api-key

#### Instantiate the model

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/community

yarn add @langchain/community

pnpm add @langchain/community

#### Add environment variables

FIREWORKS_API_KEY=your-api-key

#### Instantiate the model

import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/mistralai

yarn add @langchain/mistralai

pnpm add @langchain/mistralai

#### Add environment variables

MISTRAL_API_KEY=your-api-key

#### Instantiate the model

import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/groq

yarn add @langchain/groq

pnpm add @langchain/groq

#### Add environment variables

GROQ_API_KEY=your-api-key

#### Instantiate the model

import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
#### Install dependencies

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

* npm
* yarn
* pnpm

npm i @langchain/google-vertexai

yarn add @langchain/google-vertexai

pnpm add @langchain/google-vertexai

#### Add environment variables

GOOGLE_APPLICATION_CREDENTIALS=credentials.json

#### Instantiate the model

import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
Creating a retriever[](#creating-a-retriever "Direct link to Creating a retriever")
------------------------------------------------------------------------------------
We’ll use [the LangSmith documentation](https://docs.smith.langchain.com) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/v0.2/docs/how_to/#qa-with-rag).
Let’s use a document loader to pull text from the docs:
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";const loader = new CheerioWebBaseLoader( "https://docs.smith.langchain.com/user_guide");const rawDocs = await loader.load();rawDocs[0].pageContent.length;
36687
Next, we split it into smaller chunks that the LLM’s context window can handle:
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});
const allSplits = await textSplitter.splitDocuments(rawDocs);
Then we embed and store those chunks in a vector database:
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);
And finally, let’s create a retriever from our initialized vectorstore:
const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);
[ Document { pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 7, to: 7 } } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 6, to: 6 } } } }, Document { pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 11, to: 11 } } } }, Document { pageContent: "that time period — this is especially handy for debugging production issues.LangSmith also allows fo"... 396 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 11, to: 11 } } } }]
We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now we’ve got a retriever that can return related data from the LangSmith docs!
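As an aside, if you want to gauge how relevant each retrieved chunk is, LangChain vector stores such as the `MemoryVectorStore` used here also expose a `similaritySearchWithScore` method that returns document/score pairs (for `MemoryVectorStore`, a higher cosine similarity means a closer match). A quick sketch using the `vectorstore` created above; the `docsWithScores` name is our own:

```typescript
// Inspect retrieval quality: each result is a [Document, score] pair.
const docsWithScores = await vectorstore.similaritySearchWithScore(
  "how can langsmith help with testing?",
  4
);
for (const [doc, score] of docsWithScores) {
  console.log(score.toFixed(3), doc.pageContent.slice(0, 80));
}
```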
Document chains[](#document-chains "Direct link to Document chains")
---------------------------------------------------------------------
Now that we have a retriever that can return LangSmith docs, let’s create a chain that can use them as context to answer questions. We’ll use a `createStuffDocumentsChain` helper function to “stuff” all of the input documents into the prompt. It will also handle formatting the docs as strings.
In addition to a chat model, the function also expects a prompt that has a `context` variable, as well as a placeholder for chat history messages named `messages`. We’ll create an appropriate prompt and pass it as shown below:
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_TEMPLATE = `Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
`;

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});
We can invoke this `documentChain` by itself to answer questions. Let’s use the docs we retrieved above and ask a closely related question, `Can LangSmith help test my LLM applications?`:
import { HumanMessage, AIMessage } from "@langchain/core/messages";

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: docs,
});
"Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 229 more characters
Looks good! For comparison, we can try it with no context docs:
await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: [],
});
"I don't know."
We can see that without any context, the LLM simply answers that it doesn’t know rather than making something up.
Retrieval chains[](#retrieval-chains "Direct link to Retrieval chains")
------------------------------------------------------------------------
Let’s combine this document chain with the retriever. Here’s one way this can look:
import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});
Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.
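To make that first step concrete, here’s a small illustrative check (not from the original guide) of what `parseRetrieverInput` extracts from a conversation:

```typescript
// Only the content of the final message becomes the retrieval query.
const lastMessageContent = parseRetrieverInput({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage("Yes!"),
    new HumanMessage("How do datasets work?"),
  ],
});
console.log(lastMessageContent); // "How do datasets work?"
```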
Invoking this chain combines both steps outlined above:
await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{ messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Can LangSmith help test my LLM applications?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Can LangSmith help test my LLM applications?", name: undefined, additional_kwargs: {}, response_metadata: {} } ], context: [ Document { pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "The ability to rapidly understand how the model is performing — and debug where it is failing — is i"... 138 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } } ], answer: "Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 297 more characters}
Looks good!
Query transformation[](#query-transformation "Direct link to Query transformation")
------------------------------------------------------------------------------------
Our retrieval chain is capable of answering questions about LangSmith, but there’s a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.
The chain in its current form will struggle with this. Consider a followup question to our original question like `Tell me more!`. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:
await retriever.invoke("Tell me more!");
[ Document { pageContent: "Oftentimes, changes in the prompt, retrieval strategy, or model choice can have huge implications in"... 40 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 8, to: 8 } } } }, Document { pageContent: "This allows you to quickly test out different prompts and models. You can open the playground from a"... 37 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 10, to: 10 } } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 6, to: 6 } } } }, Document { pageContent: "together, making it easier to track the performance of and annotate your application across multiple"... 244 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: { from: 11, to: 11 } } } }]
This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can use an LLM to transform the query into a standalone query that contains no external references.
Here’s an example:
const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformationChain = queryTransformPrompt.pipe(llm);

await queryTransformationChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
AIMessage { lc_serializable: true, lc_kwargs: { content: '"LangSmith LLM application testing and evaluation features"', tool_calls: [], invalid_tool_calls: [], additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: '"LangSmith LLM application testing and evaluation features"', name: undefined, additional_kwargs: { function_call: undefined, tool_calls: undefined }, response_metadata: { tokenUsage: { completionTokens: 11, promptTokens: 144, totalTokens: 155 }, finish_reason: "stop" }, tool_calls: [], invalid_tool_calls: []}
Awesome! That transformed query would pull up context documents related to LLM application testing.
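As a quick sanity check (our own addition, not part of the original guide), we can pass that standalone query directly into the retriever and confirm the results are back on topic; `transformedQueryDocs` is a name we introduce here:

```typescript
// Retrieve with the transformed query instead of the raw "Tell me more!"
const transformedQueryDocs = await retriever.invoke(
  "LangSmith LLM application testing and evaluation features"
);
console.log(transformedQueryDocs.map((doc) => doc.pageContent.slice(0, 80)));
```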
Let’s add this to our retrieval chain. We can wrap our retriever as follows:
import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt.pipe(llm).pipe(new StringOutputParser()).pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });
Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions. The `RunnableBranch` above passes a lone first message straight through to the retriever, and otherwise runs the query transformation first so the retriever sees a standalone search query:
const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});
Awesome! Let’s invoke this new chain with the same inputs as earlier:
await conversationalRetrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{ messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Can LangSmith help test my LLM applications?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Can LangSmith help test my LLM applications?", name: undefined, additional_kwargs: {}, response_metadata: {} } ], context: [ Document { pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "The ability to rapidly understand how the model is performing — and debug where it is failing — is i"... 138 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } } ], answer: "Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 297 more characters}
await conversationalRetrievalChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
{ messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Can LangSmith help test my LLM applications?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Can LangSmith help test my LLM applications?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters, name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Tell me more!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Tell me more!", name: undefined, additional_kwargs: {}, response_metadata: {} } ], context: [ Document { pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } } ], answer: "LangSmith supports a variety of workflows to aid in the development of your applications, from creat"... 607 more characters}
You can check out [this LangSmith trace](https://smith.langchain.com/public/dc4d6bd4-fea5-45df-be94-06ad18882ae9/r) to see the internal query transformation step for yourself.
Streaming[](#streaming "Direct link to Streaming")
---------------------------------------------------
Because this chain is constructed with LCEL, you can use familiar methods like `.stream()` with it:
const stream = await conversationalRetrievalChain.stream({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

for await (const chunk of stream) {
  console.log(chunk);
}
{ messages: [ HumanMessage { lc_serializable: true, lc_kwargs: { content: "Can LangSmith help test my LLM applications?", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Can LangSmith help test my LLM applications?", name: undefined, additional_kwargs: {}, response_metadata: {} }, AIMessage { lc_serializable: true, lc_kwargs: { content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters, tool_calls: [], invalid_tool_calls: [], additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters, name: undefined, additional_kwargs: {}, response_metadata: {}, tool_calls: [], invalid_tool_calls: [] }, HumanMessage { lc_serializable: true, lc_kwargs: { content: "Tell me more!", additional_kwargs: {}, response_metadata: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Tell me more!", name: undefined, additional_kwargs: {}, response_metadata: {} } ]}{ context: [ Document { pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test SetWhi"... 347 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } }, Document { pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters, metadata: { source: "https://docs.smith.langchain.com/user_guide", loc: { lines: [Object] } } } ]}{ answer: "" }{ answer: "Lang" }{ answer: "Smith" }{ answer: " offers" }{ answer: " a" }{ answer: " comprehensive" }{ answer: " suite" }{ answer: " of" }{ answer: " tools" }{ answer: " and" }{ answer: " workflows" }{ answer: " to" }{ answer: " support" }{ answer: " the" }{ answer: " development" }{ answer: " and" }{ answer: " testing" }{ answer: " of" }{ answer: " L" }{ answer: "LM" }{ answer: " applications" }{ answer: "." }{ answer: " Here" }{ answer: " are" }{ answer: " some" }{ answer: " key" }{ answer: " features" }{ answer: " and" }{ answer: " functionalities" }{ answer: ":\n\n" }{ answer: "1" }{ answer: "." 
}{ answer: " **" }{ answer: "Test" }{ answer: " Case" }{ answer: " Management" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Bulk" }{ answer: " Upload" }{ answer: " and" }{ answer: " Creation" }{ answer: "**" }{ answer: ":" }{ answer: " You" }{ answer: " can" }{ answer: " upload" }{ answer: " test" }{ answer: " cases" }{ answer: " in" }{ answer: " bulk" }{ answer: "," }{ answer: " create" }{ answer: " them" }{ answer: " on" }{ answer: " the" }{ answer: " fly" }{ answer: "," }{ answer: " or" }{ answer: " export" }{ answer: " them" }{ answer: " from" }{ answer: " application" }{ answer: " traces" }{ answer: ".\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Datas" }{ answer: "ets" }{ answer: "**" }{ answer: ":" }{ answer: " Lang" }{ answer: "Smith" }{ answer: " allows" }{ answer: " you" }{ answer: " to" }{ answer: " create" }{ answer: " datasets" }{ answer: "," }{ answer: " which" }{ answer: " are" }{ answer: " collections" }{ answer: " of" }{ answer: " inputs" }{ answer: " and" }{ answer: " reference" }{ answer: " outputs" }{ answer: "." }{ answer: " These" }{ answer: " datasets" }{ answer: " can" }{ answer: " be" }{ answer: " used" }{ answer: " to" }{ answer: " run" }{ answer: " tests" }{ answer: " on" }{ answer: " your" }{ answer: " L" }{ answer: "LM" }{ answer: " applications" }{ answer: ".\n\n" }{ answer: "2" }{ answer: "." }{ answer: " **" }{ answer: "Custom" }{ answer: " Evalu" }{ answer: "ations" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "LL" }{ answer: "M" }{ answer: " and" }{ answer: " He" }{ answer: "uristic" }{ answer: " Based" }{ answer: "**" }{ answer: ":" }{ answer: " You" }{ answer: " can" }{ answer: " run" }{ answer: " custom" }{ answer: " evaluations" }{ answer: " using" }{ answer: " both" }{ answer: " L" }{ answer: "LM" }{ answer: "-based" }{ answer: " and" }{ answer: " heuristic" }{ answer: "-based" }{ answer: " methods" }{ answer: " to" }{ answer: " score" }{ answer: " test" }{ answer: " results" }{ answer: ".\n\n" }{ answer: "3" }{ answer: "." }{ answer: " **" }{ answer: "Comparison" }{ answer: " View" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Pro" }{ answer: "tot" }{ answer: "yp" }{ answer: "ing" }{ answer: " and" }{ answer: " Regression" }{ answer: " Tracking" }{ answer: "**" }{ answer: ":" }{ answer: " When" }{ answer: " prot" }{ answer: "otyping" }{ answer: " different" }{ answer: " versions" }{ answer: " of" }{ answer: " your" }{ answer: " applications" }{ answer: "," }{ answer: " Lang" }{ answer: "Smith" }{ answer: " provides" }{ answer: " a" }{ answer: " comparison" }{ answer: " view" }{ answer: " to" }{ answer: " see" }{ answer: " if" }{ answer: " there" }{ answer: " have" }{ answer: " been" }{ answer: " any" }{ answer: " regress" }{ answer: "ions" }{ answer: " with" }{ answer: " respect" }{ answer: " to" }{ answer: " your" }{ answer: " initial" }{ answer: " test" }{ answer: " cases" }{ answer: ".\n\n" }{ answer: "4" }{ answer: "." 
}{ answer: " **" }{ answer: "Native" }{ answer: " Rendering" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Chat" }{ answer: " Messages" }{ answer: "," }{ answer: " Functions" }{ answer: "," }{ answer: " and" }{ answer: " Documents" }{ answer: "**" }{ answer: ":" }{ answer: " Lang" }{ answer: "Smith" }{ answer: " provides" }{ answer: " native" }{ answer: " rendering" }{ answer: " of" }{ answer: " chat" }{ answer: " messages" }{ answer: "," }{ answer: " functions" }{ answer: "," }{ answer: " and" }{ answer: " retrieved" }{ answer: " documents" }{ answer: "," }{ answer: " making" }{ answer: " it" }{ answer: " easier" }{ answer: " to" }{ answer: " visualize" }{ answer: " and" }{ answer: " understand" }{ answer: " the" }{ answer: " outputs" }{ answer: ".\n\n" }{ answer: "5" }{ answer: "." }{ answer: " **" }{ answer: "Pro" }{ answer: "tot" }{ answer: "yp" }{ answer: "ing" }{ answer: " Support" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Quick" }{ answer: " Experiment" }{ answer: "ation" }{ answer: "**" }{ answer: ":" }{ answer: " The" }{ answer: " platform" }{ answer: " supports" }{ answer: " quick" }{ answer: " experimentation" }{ answer: " with" }{ answer: " different" }{ answer: " prompts" }{ answer: "," }{ answer: " model" }{ answer: " types" }{ answer: "," }{ answer: " retrieval" }{ answer: " strategies" }{ answer: "," }{ answer: " and" }{ answer: " other" }{ answer: " parameters" }{ answer: ".\n\n" }{ answer: "6" }{ answer: "." }{ answer: " **" }{ answer: "Feedback" }{ answer: " Capture" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Human" }{ answer: " Feedback" }{ answer: "**" }{ answer: ":" }{ answer: " When" }{ answer: " launching" }{ answer: " your" }{ answer: " application" }{ answer: " to" }{ answer: " an" }{ answer: " initial" }{ answer: " set" }{ answer: " of" }{ answer: " users" }{ answer: "," }{ answer: " Lang" }{ answer: "Smith" }{ answer: " allows" }{ answer: " you" }{ answer: " to" }{ answer: " gather" }{ answer: " human" }{ answer: " feedback" }{ answer: " on" }{ answer: " the" }{ answer: " responses" }{ answer: "." }{ answer: " This" }{ answer: " helps" }{ answer: " identify" }{ answer: " interesting" }{ answer: " runs" }{ answer: " and" }{ answer: " highlight" }{ answer: " edge" }{ answer: " cases" }{ answer: " causing" }{ answer: " problematic" }{ answer: " responses" }{ answer: ".\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Feedback" }{ answer: " Scores" }{ answer: "**" }{ answer: ":" }{ answer: " You" }{ answer: " can" }{ answer: " attach" }{ answer: " feedback" }{ answer: " scores" }{ answer: " to" }{ answer: " logged" }{ answer: " traces" }{ answer: "," }{ answer: " often" }{ answer: " integrated" }{ answer: " into" }{ answer: " the" }{ answer: " system" }{ answer: ".\n\n" }{ answer: "7" }{ answer: "." 
}{ answer: " **" }{ answer: "Monitoring" }{ answer: " and" }{ answer: " Troubles" }{ answer: "hooting" }{ answer: "**" }{ answer: ":\n" }{ answer: " " }{ answer: " -" }{ answer: " **" }{ answer: "Logging" }{ answer: " and" }{ answer: " Visualization" }{ answer: "**" }{ answer: ":" }{ answer: " Lang" }{ answer: "Smith" }{ answer: " logs" }{ answer: " all" }{ answer: " traces" }{ answer: "," }{ answer: " visual" }{ answer: "izes" }{ answer: " latency" }{ answer: " and" }{ answer: " token" }{ answer: " usage" }{ answer: " statistics" }{ answer: "," }{ answer: " and" }{ answer: " helps" }{ answer: " troubleshoot" }{ answer: " specific" }{ answer: " issues" }{ answer: " as" }{ answer: " they" }{ answer: " arise" }{ answer: ".\n\n" }{ answer: "Overall" }{ answer: "," }{ answer: " Lang" }{ answer: "Smith" }{ answer: " is" }{ answer: " designed" }{ answer: " to" }{ answer: " support" }{ answer: " the" }{ answer: " entire" }{ answer: " lifecycle" }{ answer: " of" }{ answer: " L" }{ answer: "LM" }{ answer: " application" }{ answer: " development" }{ answer: "," }{ answer: " from" }{ answer: " initial" }{ answer: " prot" }{ answer: "otyping" }{ answer: " to" }{ answer: " deployment" }{ answer: " and" }{ answer: " ongoing" }{ answer: " monitoring" }{ answer: "," }{ answer: " making" }{ answer: " it" }{ answer: " a" }{ answer: " powerful" }{ answer: " tool" }{ answer: " for" }{ answer: " developers" }{ answer: " looking" }{ answer: " to" }{ answer: " build" }{ answer: " and" }{ answer: " maintain" }{ answer: " high" }{ answer: "-quality" }{ answer: " L" }{ answer: "LM" }{ answer: " applications" }{ answer: "." }{ answer: "" }
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You’ve now learned some techniques for adding personal data as context to your chatbots.
This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out our [how to guides on retrieval](/v0.2/docs/how_to/#retrievers).
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to manage memory
](/v0.2/docs/how_to/chatbots_memory)[
Next
How to use tools
](/v0.2/docs/how_to/chatbots_tools)
* [Setup](#setup)
* [Creating a retriever](#creating-a-retriever)
* [Document chains](#document-chains)
* [Retrieval chains](#retrieval-chains)
* [Query transformation](#query-transformation)
* [Streaming](#streaming)
* [Next steps](#next-steps)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/code_splitter | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to split code
=================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Text splitters](/v0.2/docs/concepts#text-splitters)
* [Recursively splitting text by character](/v0.2/docs/how_to/recursive_text_splitter)
[RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for splitting text in a specific programming language.
Supported languages include:
"html" | "cpp" | "go" | "java" | "js" | "php" | "proto" | "python" | "rst" | "ruby" | "rust" | "scala" | "swift" | "markdown" | "latex" | "sol"
To view the list of separators for a given language, pass one of the values from the list above into the `getSeparatorsForLanguage()` static method:
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

RecursiveCharacterTextSplitter.getSeparatorsForLanguage("js");
```
[ "\nfunction ", "\nconst ", "\nlet ", "\nvar ", "\nclass ", "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", "\ndefault ", "\n\n", "\n", " ", ""]
JS
--
Here’s an example using the JS text splitter:
```typescript
const JS_CODE = `
function helloWorld() {
  console.log("Hello, World!");
}

// Call the function
helloWorld();
`;

const jsSplitter = RecursiveCharacterTextSplitter.fromLanguage("js", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const jsDocs = await jsSplitter.createDocuments([JS_CODE]);

jsDocs;
```
```
[
  Document { pageContent: 'function helloWorld() {\n console.log("Hello, World!");\n}', metadata: { loc: { lines: { from: 2, to: 4 } } } },
  Document { pageContent: "// Call the function\nhelloWorld();", metadata: { loc: { lines: { from: 6, to: 7 } } } }
]
```
Python
------
Here’s an example for Python:
```typescript
const PYTHON_CODE = `
def hello_world():
    print("Hello, World!")

# Call the function
hello_world()
`;

const pythonSplitter = RecursiveCharacterTextSplitter.fromLanguage("python", {
  chunkSize: 50,
  chunkOverlap: 0,
});
const pythonDocs = await pythonSplitter.createDocuments([PYTHON_CODE]);

pythonDocs;
```
```
[
  Document { pageContent: 'def hello_world():\n print("Hello, World!")', metadata: { loc: { lines: { from: 2, to: 3 } } } },
  Document { pageContent: "# Call the function\nhello_world()", metadata: { loc: { lines: { from: 5, to: 6 } } } }
]
```
Markdown
--------
Here’s an example of splitting on markdown separators:
```typescript
const markdownText = `
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡

## Quick Install

\`\`\`bash
# Hopefully this code block isn't split
pip install langchain
\`\`\`

As an open-source project in a rapidly developing field, we are extremely open to contributions.
`;

const mdSplitter = RecursiveCharacterTextSplitter.fromLanguage("markdown", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const mdDocs = await mdSplitter.createDocuments([markdownText]);

mdDocs;
```
[ Document { pageContent: "# 🦜️🔗 LangChain", metadata: { loc: { lines: { from: 2, to: 2 } } } }, Document { pageContent: "⚡ Building applications with LLMs through composability ⚡", metadata: { loc: { lines: { from: 4, to: 4 } } } }, Document { pageContent: "## Quick Install", metadata: { loc: { lines: { from: 6, to: 6 } } } }, Document { pageContent: "```bash\n# Hopefully this code block isn't split", metadata: { loc: { lines: { from: 8, to: 9 } } } }, Document { pageContent: "pip install langchain", metadata: { loc: { lines: { from: 10, to: 10 } } } }, Document { pageContent: "```", metadata: { loc: { lines: { from: 11, to: 11 } } } }, Document { pageContent: "As an open-source project in a rapidly developing field, we", metadata: { loc: { lines: { from: 13, to: 13 } } } }, Document { pageContent: "are extremely open to contributions.", metadata: { loc: { lines: { from: 13, to: 13 } } } }]
LaTeX
-----
Here’s an example using LaTeX text:
```typescript
const latexText = `
\documentclass{article}

\begin{document}

\maketitle

\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.

\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.

\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\end{document}
`;

const latexSplitter = RecursiveCharacterTextSplitter.fromLanguage("latex", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const latexDocs = await latexSplitter.createDocuments([latexText]);

latexDocs;
```
[ Document { pageContent: "documentclass{article}\n\n\begin{document}\n\nmaketitle", metadata: { loc: { lines: { from: 2, to: 6 } } } }, Document { pageContent: "section{Introduction}", metadata: { loc: { lines: { from: 8, to: 8 } } } }, Document { pageContent: "Large language models (LLMs) are a type of machine learning", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "model that can be trained on vast amounts of text data to", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "generate human-like language. In recent years, LLMs have", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "made significant advances in a variety of natural language", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "processing tasks, including language translation, text", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "generation, and sentiment analysis.", metadata: { loc: { lines: { from: 9, to: 9 } } } }, Document { pageContent: "subsection{History of LLMs}", metadata: { loc: { lines: { from: 11, to: 11 } } } }, Document { pageContent: "The earliest LLMs were developed in the 1980s and 1990s,", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "but they were limited by the amount of data that could be", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "processed and the computational power available at the", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "time. In the past decade, however, advances in hardware and", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "software have made it possible to train LLMs on massive", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "datasets, leading to significant improvements in", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "performance.", metadata: { loc: { lines: { from: 12, to: 12 } } } }, Document { pageContent: "subsection{Applications of LLMs}", metadata: { loc: { lines: { from: 14, to: 14 } } } }, Document { pageContent: "LLMs have many applications in industry, including", metadata: { loc: { lines: { from: 15, to: 15 } } } }, Document { pageContent: "chatbots, content creation, and virtual assistants. They", metadata: { loc: { lines: { from: 15, to: 15 } } } }, Document { pageContent: "can also be used in academia for research in linguistics,", metadata: { loc: { lines: { from: 15, to: 15 } } } }, Document { pageContent: "psychology, and computational linguistics.", metadata: { loc: { lines: { from: 15, to: 15 } } } }, Document { pageContent: "end{document}", metadata: { loc: { lines: { from: 17, to: 17 } } } }]
HTML
----
Here’s an example using an HTML text splitter:
```typescript
const htmlText = `
<!DOCTYPE html>
<html>
  <head>
    <title>🦜️🔗 LangChain</title>
    <style>
      body {
        font-family: Arial, sans-serif;
      }
      h1 {
        color: darkblue;
      }
    </style>
  </head>
  <body>
    <div>
      <h1>🦜️🔗 LangChain</h1>
      <p>⚡ Building applications with LLMs through composability ⚡</p>
    </div>
    <div>
      As an open-source project in a rapidly developing field, we are extremely open to contributions.
    </div>
  </body>
</html>
`;

const htmlSplitter = RecursiveCharacterTextSplitter.fromLanguage("html", {
  chunkSize: 60,
  chunkOverlap: 0,
});
const htmlDocs = await htmlSplitter.createDocuments([htmlText]);

htmlDocs;
```
[ Document { pageContent: "<!DOCTYPE html>\n<html>", metadata: { loc: { lines: { from: 2, to: 3 } } } }, Document { pageContent: "<head>\n <title>🦜️🔗 LangChain</title>", metadata: { loc: { lines: { from: 4, to: 5 } } } }, Document { pageContent: "<style>\n body {\n font-family:", metadata: { loc: { lines: { from: 6, to: 8 } } } }, Document { pageContent: "Arial, sans-serif;\n }\n h1 {", metadata: { loc: { lines: { from: 8, to: 10 } } } }, Document { pageContent: "color: darkblue;\n }\n </style>", metadata: { loc: { lines: { from: 11, to: 13 } } } }, Document { pageContent: "</head>", metadata: { loc: { lines: { from: 14, to: 14 } } } }, Document { pageContent: "<body>", metadata: { loc: { lines: { from: 15, to: 15 } } } }, Document { pageContent: "<div>\n <h1>🦜️🔗 LangChain</h1>", metadata: { loc: { lines: { from: 16, to: 17 } } } }, Document { pageContent: "<p>⚡ Building applications with LLMs through composability", metadata: { loc: { lines: { from: 18, to: 18 } } } }, Document { pageContent: "⚡</p>\n </div>", metadata: { loc: { lines: { from: 18, to: 19 } } } }, Document { pageContent: "<div>\n As an open-source project in a rapidly", metadata: { loc: { lines: { from: 20, to: 21 } } } }, Document { pageContent: "developing field, we are extremely open to contributions.", metadata: { loc: { lines: { from: 21, to: 21 } } } }, Document { pageContent: "</div>\n </body>\n</html>", metadata: { loc: { lines: { from: 22, to: 24 } } } }]
Solidity
--------
Here’s an example of splitting [Solidity](https://soliditylang.org/) code:
```typescript
const SOL_CODE = `
pragma solidity ^0.8.20;
contract HelloWorld {
   function add(uint a, uint b) pure public returns(uint) {
      return a + b;
   }
}
`;

const solSplitter = RecursiveCharacterTextSplitter.fromLanguage("sol", {
  chunkSize: 128,
  chunkOverlap: 0,
});
const solDocs = await solSplitter.createDocuments([SOL_CODE]);

solDocs;
```
[ Document { pageContent: "pragma solidity ^0.8.20;", metadata: { loc: { lines: { from: 2, to: 2 } } } }, Document { pageContent: "contract HelloWorld {\n" + " function add(uint a, uint b) pure public returns(uint) {\n" + " return a + "... 9 more characters, metadata: { loc: { lines: { from: 3, to: 7 } } } }]
PHP
---
Here’s an example of splitting on PHP code:
```typescript
const PHP_CODE = `<?php
namespace foo;
class Hello {
    public function __construct() { }
}
function hello() {
    echo "Hello World!";
}
interface Human {
    public function breath();
}
trait Foo { }
enum Color
{
    case Red;
    case Blue;
}`;

const phpSplitter = RecursiveCharacterTextSplitter.fromLanguage("php", {
  chunkSize: 50,
  chunkOverlap: 0,
});
const phpDocs = await phpSplitter.createDocuments([PHP_CODE]);

phpDocs;
```
[ Document { pageContent: "<?php\nnamespace foo;", metadata: { loc: { lines: { from: 1, to: 2 } } } }, Document { pageContent: "class Hello {", metadata: { loc: { lines: { from: 3, to: 3 } } } }, Document { pageContent: "public function __construct() { }\n}", metadata: { loc: { lines: { from: 4, to: 5 } } } }, Document { pageContent: 'function hello() {\n echo "Hello World!";\n}', metadata: { loc: { lines: { from: 6, to: 8 } } } }, Document { pageContent: "interface Human {\n public function breath();\n}", metadata: { loc: { lines: { from: 9, to: 11 } } } }, Document { pageContent: "trait Foo { }\nenum Color\n{\n case Red;", metadata: { loc: { lines: { from: 12, to: 15 } } } }, Document { pageContent: "case Blue;\n}", metadata: { loc: { lines: { from: 16, to: 17 } } } }]
Next steps
----------
You’ve now learned a method for splitting text on code-specific separators.
Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/introduction/
Introduction
============
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
* **Development**: Build your applications using LangChain's open-source [building blocks](/v0.2/docs/how_to/#langchain-expression-language-lcel) and [components](/v0.2/docs/how_to/). Hit the ground running using [third-party integrations](/v0.2/docs/integrations/platforms/).
* **Productionization**: Use [LangSmith](/v0.2/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
* **Deployment**: Turn any chain into an API with [LangServe](https://www.langchain.com/langserve).
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
Concretely, the framework consists of the following open-source libraries (a minimal usage sketch follows the list):
* **`@langchain/core`**: Base abstractions and LangChain Expression Language.
* **`@langchain/community`**: Third party integrations.
* Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**.
* **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
* **[LangGraph.js](/v0.2/docs/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
* **[LangSmith](/v0.2/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
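To make the layering concrete, here is a minimal sketch of how the packages compose. It assumes you have installed `@langchain/core` and `@langchain/openai` and set `OPENAI_API_KEY`; the prompt and model are illustrative, not prescribed:

```typescript
// Base abstractions and LCEL come from @langchain/core; the model
// implementation comes from a lightweight partner package.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Translate to French: {text}");
const model = new ChatOpenAI({});

// LCEL's .pipe() composes the two into a single runnable chain.
const chain = prompt.pipe(model);
const result = await chain.invoke({ text: "Hello, world!" });
```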
note
These docs focus on the JavaScript LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
[Tutorials](/v0.2/docs/tutorials)
---------------------------------
If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials](/v0.2/docs/tutorials). This is the best place to get started.
Good ones to begin with:
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
Explore the full list of tutorials [here](/v0.2/docs/tutorials).
[How-To Guides](/v0.2/docs/how_to/)
-----------------------------------
[Here](/v0.2/docs/how_to/) you'll find short answers to “How do I…?” types of questions. These how-to guides don't cover topics in depth - you'll find that material in the [Tutorials](/v0.2/docs/tutorials) and the [API Reference](https://v02.api.js.langchain.com). However, these guides will help you quickly accomplish common tasks.
[Conceptual Guide](/v0.2/docs/concepts)
---------------------------------------
Introductions to all the key parts of LangChain you'll need to know! [Here](/v0.2/docs/concepts) you'll find high level explanations of all LangChain concepts.
[API reference](https://v02.api.js.langchain.com)
-------------------------------------------------
Head to the reference section for full documentation of all classes and methods in the LangChain JavaScript packages.
Ecosystem
---------
### [🦜🛠️ LangSmith](/v0.2/docs/langsmith)
Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
### [🦜🕸️ LangGraph](/v0.2/docs/langgraph)
Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
Additional resources
--------------------
### [Security](/v0.2/docs/security)
Read up on our [Security](/v0.2/docs/security) best practices to make sure you're developing safely with LangChain.
### [Integrations](/v0.2/docs/integrations/platforms/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.2/docs/integrations/platforms/).
### [Contributing](/v0.2/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
https://js.langchain.com/v0.1/docs/get_started/introduction/
Introduction
============
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
* **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
* **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
* **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
* **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks. (_Python only_)
* **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API. (_Python only_)
* **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
![LangChain Diagram](/v0.1/assets/images/langchain_stack_feb_2024-101939844004a99c1b676723fc0ee5e9.webp)
Together, these products simplify the entire application lifecycle:
* **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
* **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
* **Deploy**: Turn any chain into an API with LangServe.
LangChain Libraries
-------------------
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
Get started
-----------
[Here's](/v0.1/docs/get_started/installation/) how to install LangChain, set up your environment, and start building.
We recommend following our [Quickstart](/v0.1/docs/get_started/quickstart/) guide to familiarize yourself with the framework by building your first LangChain application.
Read up on our [Security](/v0.1/docs/security/) best practices to make sure you're developing safely with LangChain.
note
These docs focus on the JS/TS LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
LangChain Expression Language (LCEL)
------------------------------------
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains. A minimal sketch follows the list below.
* **[Overview](/v0.1/docs/expression_language/)**: LCEL and its benefits
* **[Interface](/v0.1/docs/expression_language/interface/)**: The standard interface for LCEL objects
* **[How-to](/v0.1/docs/expression_language/how_to/routing/)**: Key features of LCEL
* **[Cookbook](/v0.1/docs/expression_language/cookbook/)**: Example code for accomplishing common tasks
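To give a flavor of LCEL, here is a minimal sketch of a “prompt + LLM” chain. It assumes an installed chat model integration (here `@langchain/openai`, with `OPENAI_API_KEY` set); any chat model works the same way:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Declaratively compose prompt -> model -> output parser with .pipe().
const chain = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}")
  .pipe(new ChatOpenAI({}))
  .pipe(new StringOutputParser());

// The composed chain exposes the standard interface: invoke, stream, batch.
const joke = await chain.invoke({ topic: "bears" });
```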
Modules
-------
LangChain provides standard, extendable interfaces and integrations for the following modules:
#### [Model I/O](/v0.1/docs/modules/model_io/)
Interface with language models
#### [Retrieval](/v0.1/docs/modules/data_connection/)
Interface with application-specific data
#### [Agents](/v0.1/docs/modules/agents/)
Let models choose which tools to use given high-level directives
Examples, ecosystem, and resources
----------------------------------
### [Use cases](/v0.1/docs/use_cases/)
Walkthroughs and techniques for common end-to-end use cases, like:
* [Document question answering](/v0.1/docs/use_cases/question_answering/)
* [RAG](/v0.1/docs/use_cases/question_answering/)
* [Agents](/v0.1/docs/use_cases/autonomous_agents/)
* and much more...
### [Integrations](/v0.1/docs/integrations/platforms/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.1/docs/integrations/platforms/).
### [API reference](https://api.js.langchain.com)
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental packages.
### [Developer's guide](/v0.1/docs/contributing/)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
### [Community](/v0.1/docs/community/)
Head to the [Community navigator](/v0.1/docs/community/) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
https://js.langchain.com/v0.2/docs/integrations/platforms/
Providers
=========
LangChain integrates with many providers.
Partner Packages
----------------
These providers have standalone `@langchain/{provider}` packages for improved versioning, dependency management, and testing; a minimal usage sketch follows the list.
* [Anthropic](https://www.npmjs.com/package/@langchain/anthropic)
* [Azure OpenAI](https://www.npmjs.com/package/@langchain/azure-openai)
* [Cloudflare](https://www.npmjs.com/package/@langchain/cloudflare)
* [Cohere](https://www.npmjs.com/package/@langchain/cohere)
* [Exa](https://www.npmjs.com/package/@langchain/exa)
* [Google GenAI](https://www.npmjs.com/package/@langchain/google-genai)
* [Google VertexAI](https://www.npmjs.com/package/@langchain/google-vertexai)
* [Google VertexAI Web](https://www.npmjs.com/package/@langchain/google-vertexai-web)
* [Groq](https://www.npmjs.com/package/@langchain/groq)
* [MistralAI](https://www.npmjs.com/package/@langchain/mistralai)
* [MongoDB](https://www.npmjs.com/package/@langchain/mongodb)
* [Nomic](https://www.npmjs.com/package/@langchain/nomic)
* [OpenAI](https://www.npmjs.com/package/@langchain/openai)
* [Pinecone](https://www.npmjs.com/package/@langchain/pinecone)
* [Qdrant](https://www.npmjs.com/package/@langchain/qdrant)
* [Redis](https://www.npmjs.com/package/@langchain/redis)
* [Weaviate](https://www.npmjs.com/package/@langchain/weaviate)
* [Yandex](https://www.npmjs.com/package/@langchain/yandex)
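Using one of these packages typically looks like the sketch below; the package and model name are illustrative (install whichever provider you need, e.g. `npm install @langchain/anthropic`, and set its API key):

```typescript
// Each partner package depends only on @langchain/core, so swapping
// providers is usually just a different import and constructor call.
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-haiku-20240307", // illustrative model name
});
const reply = await model.invoke("Say hello in one word.");
```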
https://js.langchain.com/v0.2/docs/people/
People
======
There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐!
This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews.
Top reviewers
-------------
As LangChain has grown, the amount of surface area that maintainers cover has grown as well.
Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏!
[![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg)
[![Avatar for sullivan-sean](https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4)](https://github.com/sullivan-sean)[@sullivan-sean](https://github.com/sullivan-sean)
[![Avatar for ppramesi](https://avatars.githubusercontent.com/u/6775031?v=4)](https://github.com/ppramesi)[@ppramesi](https://github.com/ppramesi)
[![Avatar for jacobrosenthal](https://avatars.githubusercontent.com/u/455796?v=4)](https://github.com/jacobrosenthal)[@jacobrosenthal](https://github.com/jacobrosenthal)
[![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo)
[![Avatar for mieslep](https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4)](https://github.com/mieslep)[@mieslep](https://github.com/mieslep)
Top recent contributors
-----------------------
The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact.
Thank you all so much for your time and efforts in making LangChain better ❤️!
[![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg)
[![Avatar for sinedied](https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4)](https://github.com/sinedied)[@sinedied](https://github.com/sinedied)
[![Avatar for lokesh-couchbase](https://avatars.githubusercontent.com/u/113521973?v=4)](https://github.com/lokesh-couchbase)[@lokesh-couchbase](https://github.com/lokesh-couchbase)
[![Avatar for nicoloboschi](https://avatars.githubusercontent.com/u/23314389?u=2014e20e246530fa89bd902fe703b6f9e6ecf833&v=4)](https://github.com/nicoloboschi)[@nicoloboschi](https://github.com/nicoloboschi)
[![Avatar for MJDeligan](https://avatars.githubusercontent.com/u/48515433?v=4)](https://github.com/MJDeligan)[@MJDeligan](https://github.com/MJDeligan)
[![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo)
[![Avatar for lukywong](https://avatars.githubusercontent.com/u/1433871?v=4)](https://github.com/lukywong)[@lukywong](https://github.com/lukywong)
[![Avatar for rahilvora](https://avatars.githubusercontent.com/u/5127548?u=0cd74312c28da39646785409fb0a37a9b3d3420a&v=4)](https://github.com/rahilvora)[@rahilvora](https://github.com/rahilvora)
[![Avatar for davidfant](https://avatars.githubusercontent.com/u/17096641?u=9b935c68c077d53642c1b4aff62f04d08e2ffac7&v=4)](https://github.com/davidfant)[@davidfant](https://github.com/davidfant)
[![Avatar for easwee](https://avatars.githubusercontent.com/u/2518825?u=a24026bc5ed35688174b1a36f3c29eda594d38d7&v=4)](https://github.com/easwee)[@easwee](https://github.com/easwee)
[![Avatar for fahreddinozcan](https://avatars.githubusercontent.com/u/88107904?v=4)](https://github.com/fahreddinozcan)[@fahreddinozcan](https://github.com/fahreddinozcan)
[![Avatar for karol-f](https://avatars.githubusercontent.com/u/893082?u=0cda88d40a24ee696580f2e62f5569f49117cf40&v=4)](https://github.com/karol-f)[@karol-f](https://github.com/karol-f)
[![Avatar for janvi-kalra](https://avatars.githubusercontent.com/u/119091286?u=ed9e9d72bbf9964b80f81e5ba8d1d5b2f860c23f&v=4)](https://github.com/janvi-kalra)[@janvi-kalra](https://github.com/janvi-kalra)
[![Avatar for Anush008](https://avatars.githubusercontent.com/u/46051506?u=026f5f140e8b7ba4744bf971f9ebdea9ebab67ca&v=4)](https://github.com/Anush008)[@Anush008](https://github.com/Anush008)
[![Avatar for cinqisap](https://avatars.githubusercontent.com/u/158295355?v=4)](https://github.com/cinqisap)[@cinqisap](https://github.com/cinqisap)
[![Avatar for andrewnguonly](https://avatars.githubusercontent.com/u/7654246?u=b8599019655adaada3cdc3c3006798df42c44494&v=4)](https://github.com/andrewnguonly)[@andrewnguonly](https://github.com/andrewnguonly)
[![Avatar for seuha516](https://avatars.githubusercontent.com/u/79067549?u=de7a2688cb44010afafd055d707f3463585494df&v=4)](https://github.com/seuha516)[@seuha516](https://github.com/seuha516)
[![Avatar for jasonnathan](https://avatars.githubusercontent.com/u/780157?u=d5efec16b5e3a9913dc44967059a70d9a610755d&v=4)](https://github.com/jasonnathan)[@jasonnathan](https://github.com/jasonnathan)
[![Avatar for mieslep](https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4)](https://github.com/mieslep)[@mieslep](https://github.com/mieslep)
[![Avatar for jeasonnow](https://avatars.githubusercontent.com/u/16950207?u=ab2d0d4f1574398ac842e6bb3c2ba020ab7711eb&v=4)](https://github.com/jeasonnow)[@jeasonnow](https://github.com/jeasonnow)
Core maintainers
----------------
Hello there 👋!
We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths with at least one of us already.
[![Avatar for jacoblee93](https://avatars.githubusercontent.com/u/6952323?u=d785f9406c5a78ebd75922567b2693fb643c3bb0&v=4)](https://github.com/jacoblee93)[@jacoblee93](https://github.com/jacoblee93)
[![Avatar for hwchase17](https://avatars.githubusercontent.com/u/11986836?u=f4c4f21a82b2af6c9f91e1f1d99ea40062f7a101&v=4)](https://github.com/hwchase17)[@hwchase17](https://github.com/hwchase17)
[![Avatar for bracesproul](https://avatars.githubusercontent.com/u/46789226?u=83f467441c4b542b900fe2bb8fe45e26bf918da0&v=4)](https://github.com/bracesproul)[@bracesproul](https://github.com/bracesproul)
[![Avatar for dqbd](https://avatars.githubusercontent.com/u/1443449?u=fe32372ae8f497065ef0a1c54194d9dff36fb81d&v=4)](https://github.com/dqbd)[@dqbd](https://github.com/dqbd)
[![Avatar for nfcampos](https://avatars.githubusercontent.com/u/56902?u=fdb30e802c68bc338dd9c0820f713e4fdac75db7&v=4)](https://github.com/nfcampos)[@nfcampos](https://github.com/nfcampos)
Top all-time contributors
-------------------------
And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟:
[![Avatar for afirstenberg](https://avatars.githubusercontent.com/u/3507578?v=4)](https://github.com/afirstenberg)[@afirstenberg](https://github.com/afirstenberg)
[![Avatar for ppramesi](https://avatars.githubusercontent.com/u/6775031?v=4)](https://github.com/ppramesi)[@ppramesi](https://github.com/ppramesi)
[![Avatar for jacobrosenthal](https://avatars.githubusercontent.com/u/455796?v=4)](https://github.com/jacobrosenthal)[@jacobrosenthal](https://github.com/jacobrosenthal)
[![Avatar for sullivan-sean](https://avatars.githubusercontent.com/u/22581534?u=8f88473db2f929a965b6371733efda28e3fa1948&v=4)](https://github.com/sullivan-sean)[@sullivan-sean](https://github.com/sullivan-sean)
[![Avatar for skarard](https://avatars.githubusercontent.com/u/602085?u=f8a9736cfa9fe8875d19861b0276e24de8f3d0a0&v=4)](https://github.com/skarard)[@skarard](https://github.com/skarard)
[![Avatar for tomasonjo](https://avatars.githubusercontent.com/u/19948365?v=4)](https://github.com/tomasonjo)[@tomasonjo](https://github.com/tomasonjo)
[![Avatar for chasemcdo](https://avatars.githubusercontent.com/u/74692158?u=9c25a170d24cc30f10eafc4d44a38067cdf5eed8&v=4)](https://github.com/chasemcdo)[@chasemcdo](https://github.com/chasemcdo)
[![Avatar for MaximeThoonsen](https://avatars.githubusercontent.com/u/4814551?u=efb35c6a7dc1ce99dfa8ac8f0f1314cdb4fddfe1&v=4)](https://github.com/MaximeThoonsen)[@MaximeThoonsen](https://github.com/MaximeThoonsen)
[![Avatar for mieslep](https://avatars.githubusercontent.com/u/5420540?u=8f038c002fbce42427999eb715dc9f868cef1c84&v=4)](https://github.com/mieslep)[@mieslep](https://github.com/mieslep)
[![Avatar for sinedied](https://avatars.githubusercontent.com/u/593151?u=08557bbdd96221813b8aec932dd7de895ac040ea&v=4)](https://github.com/sinedied)[@sinedied](https://github.com/sinedied)
[![Avatar for ysnows](https://avatars.githubusercontent.com/u/11255869?u=b0b519b6565c43d01795ba092521c8677f30134c&v=4)](https://github.com/ysnows)[@ysnows](https://github.com/ysnows)
[![Avatar for tyumentsev4](https://avatars.githubusercontent.com/u/56769451?u=088102b6160822bc68c25a2a5df170080d0b16a2&v=4)](https://github.com/tyumentsev4)[@tyumentsev4](https://github.com/tyumentsev4)
[![Avatar for nickscamara](https://avatars.githubusercontent.com/u/20311743?u=29bf2391ae34297a12a88d813731b0bdf289e4a5&v=4)](https://github.com/nickscamara)[@nickscamara](https://github.com/nickscamara)
[![Avatar for nigel-daniels](https://avatars.githubusercontent.com/u/4641452?v=4)](https://github.com/nigel-daniels)[@nigel-daniels](https://github.com/nigel-daniels)
[![Avatar for MJDeligan](https://avatars.githubusercontent.com/u/48515433?v=4)](https://github.com/MJDeligan)[@MJDeligan](https://github.com/MJDeligan)
[![Avatar for malandis](https://avatars.githubusercontent.com/u/3690240?v=4)](https://github.com/malandis)[@malandis](https://github.com/malandis)
[![Avatar for danielchalef](https://avatars.githubusercontent.com/u/131175?u=332fe36f12d9ffe9e4414dc776b381fe801a9c53&v=4)](https://github.com/danielchalef)[@danielchalef](https://github.com/danielchalef)
[![Avatar for easwee](https://avatars.githubusercontent.com/u/2518825?u=a24026bc5ed35688174b1a36f3c29eda594d38d7&v=4)](https://github.com/easwee)[@easwee](https://github.com/easwee)
[![Avatar for kwkr](https://avatars.githubusercontent.com/u/20127759?v=4)](https://github.com/kwkr)[@kwkr](https://github.com/kwkr)
[![Avatar for ewfian](https://avatars.githubusercontent.com/u/12423122?u=681de0c470e9b349963ee935ddfd6b2e097e7181&v=4)](https://github.com/ewfian)[@ewfian](https://github.com/ewfian)
[![Avatar for Swimburger](https://avatars.githubusercontent.com/u/3382717?u=5a84a173b0e80effc9161502c0848bf06c84bde9&v=4)](https://github.com/Swimburger)[@Swimburger](https://github.com/Swimburger)
[![Avatar for mfortman11](https://avatars.githubusercontent.com/u/6100513?u=c758a02fc05dc36315fcfadfccd6208883436cb8&v=4)](https://github.com/mfortman11)[@mfortman11](https://github.com/mfortman11)
[![Avatar for jasondotparse](https://avatars.githubusercontent.com/u/13938372?u=0e3f80aa515c41b7d9084b73d761cad378ebdc7a&v=4)](https://github.com/jasondotparse)[@jasondotparse](https://github.com/jasondotparse)
[![Avatar for kristianfreeman](https://avatars.githubusercontent.com/u/922353?u=ad00df1efd8f04a469de6087ee3cd7d7012533f7&v=4)](https://github.com/kristianfreeman)[@kristianfreeman](https://github.com/kristianfreeman)
[![Avatar for neebdev](https://avatars.githubusercontent.com/u/94310799?u=b6f604bc6c3a6380f0b83025ca94e2e22179ac2a&v=4)](https://github.com/neebdev)[@neebdev](https://github.com/neebdev)
[![Avatar for tsg](https://avatars.githubusercontent.com/u/101817?u=39f31ff29d2589046148c6ed1c1c923982d86b1a&v=4)](https://github.com/tsg)[@tsg](https://github.com/tsg)
[![Avatar for lokesh-couchbase](https://avatars.githubusercontent.com/u/113521973?v=4)](https://github.com/lokesh-couchbase)[@lokesh-couchbase](https://github.com/lokesh-couchbase)
[![Avatar for nicoloboschi](https://avatars.githubusercontent.com/u/23314389?u=2014e20e246530fa89bd902fe703b6f9e6ecf833&v=4)](https://github.com/nicoloboschi)[@nicoloboschi](https://github.com/nicoloboschi)
[![Avatar for zackproser](https://avatars.githubusercontent.com/u/1769996?u=3555434bbfa99f2267f30ded67a15132e3a7bd27&v=4)](https://github.com/zackproser)[@zackproser](https://github.com/zackproser)
[![Avatar for justindra](https://avatars.githubusercontent.com/u/4289486?v=4)](https://github.com/justindra)[@justindra](https://github.com/justindra)
[![Avatar for vincelwt](https://avatars.githubusercontent.com/u/5092466?u=713f9947e4315b6f0ef62ec5cccd978133006783&v=4)](https://github.com/vincelwt)[@vincelwt](https://github.com/vincelwt)
[![Avatar for cwoolum](https://avatars.githubusercontent.com/u/942415?u=8210ef711d1666ec234db9a0c4a9b32fd9f36593&v=4)](https://github.com/cwoolum)[@cwoolum](https://github.com/cwoolum)
[![Avatar for sunner](https://avatars.githubusercontent.com/u/255413?v=4)](https://github.com/sunner)[@sunner](https://github.com/sunner)
[![Avatar for lukywong](https://avatars.githubusercontent.com/u/1433871?v=4)](https://github.com/lukywong)[@lukywong](https://github.com/lukywong)
[![Avatar for mayooear](https://avatars.githubusercontent.com/u/107035552?u=708ca9b002559f6175803a80a1e47f3e84ba91e2&v=4)](https://github.com/mayooear)[@mayooear](https://github.com/mayooear)
[![Avatar for chitalian](https://avatars.githubusercontent.com/u/26822232?u=accedd106a5e9d8335cb631c1bfe84b8cc494083&v=4)](https://github.com/chitalian)[@chitalian](https://github.com/chitalian)
[![Avatar for rahilvora](https://avatars.githubusercontent.com/u/5127548?u=0cd74312c28da39646785409fb0a37a9b3d3420a&v=4)](https://github.com/rahilvora)[@rahilvora](https://github.com/rahilvora)
[![Avatar for paaatrrrick](https://avatars.githubusercontent.com/u/88113528?u=23275c7b8928a38b34195358ea9f4d057fe1e171&v=4)](https://github.com/paaatrrrick)[@paaatrrrick](https://github.com/paaatrrrick)
[![Avatar for alexleventer](https://avatars.githubusercontent.com/u/3254549?u=794d178a761379e162a1092c556e98a9ec5c2410&v=4)](https://github.com/alexleventer)[@alexleventer](https://github.com/alexleventer)
[![Avatar for 3eif](https://avatars.githubusercontent.com/u/29833473?u=37b8f7a25883ee98bc6b6bd6029c6d5479724e2f&v=4)](https://github.com/3eif)[@3eif](https://github.com/3eif)
[![Avatar for BitVoyagerMan](https://avatars.githubusercontent.com/u/121993229?u=717ed7012c040d5bf3a8ff1fd695a6a4f1ff0626&v=4)](https://github.com/BitVoyagerMan)[@BitVoyagerMan](https://github.com/BitVoyagerMan)
[![Avatar for xixixao](https://avatars.githubusercontent.com/u/1473433?u=c4bf1cf9f8699c8647894cd226c0bf9124bdad58&v=4)](https://github.com/xixixao)[@xixixao](https://github.com/xixixao)
[![Avatar for jo32](https://avatars.githubusercontent.com/u/501632?u=a714d65c000d8f489f9fc2363f9a372b0dba05e3&v=4)](https://github.com/jo32)[@jo32](https://github.com/jo32)
[![Avatar for RohitMidha23](https://avatars.githubusercontent.com/u/38888530?u=5c4b99eff970e551e5b756f270aa5234bc666316&v=4)](https://github.com/RohitMidha23)[@RohitMidha23](https://github.com/RohitMidha23)
[![Avatar for karol-f](https://avatars.githubusercontent.com/u/893082?u=0cda88d40a24ee696580f2e62f5569f49117cf40&v=4)](https://github.com/karol-f)[@karol-f](https://github.com/karol-f)
[![Avatar for konstantinov-raft](https://avatars.githubusercontent.com/u/105433902?v=4)](https://github.com/konstantinov-raft)[@konstantinov-raft](https://github.com/konstantinov-raft)
[![Avatar for volodymyr-memsql](https://avatars.githubusercontent.com/u/57520563?v=4)](https://github.com/volodymyr-memsql)[@volodymyr-memsql](https://github.com/volodymyr-memsql)
[![Avatar for jameshfisher](https://avatars.githubusercontent.com/u/166966?u=b78059abca798fbce8c9da4f6ddfb72ea03b20bb&v=4)](https://github.com/jameshfisher)[@jameshfisher](https://github.com/jameshfisher)
[![Avatar for the-powerpointer](https://avatars.githubusercontent.com/u/134403026?u=ddd77b62b35c5497ae3d846f8917bdd81e5ef19e&v=4)](https://github.com/the-powerpointer)[@the-powerpointer](https://github.com/the-powerpointer)
[![Avatar for davidfant](https://avatars.githubusercontent.com/u/17096641?u=9b935c68c077d53642c1b4aff62f04d08e2ffac7&v=4)](https://github.com/davidfant)[@davidfant](https://github.com/davidfant)
[![Avatar for MthwRobinson](https://avatars.githubusercontent.com/u/1635179?u=0631cb84ca580089198114f94d9c27efe730220e&v=4)](https://github.com/MthwRobinson)[@MthwRobinson](https://github.com/MthwRobinson)
[![Avatar for mishushakov](https://avatars.githubusercontent.com/u/10400064?u=581d97314df325c15ec221f64834003d3bba5cc1&v=4)](https://github.com/mishushakov)[@mishushakov](https://github.com/mishushakov)
[![Avatar for SimonPrammer](https://avatars.githubusercontent.com/u/44960995?u=a513117a60e9f1aa09247ec916018ee272897169&v=4)](https://github.com/SimonPrammer)[@SimonPrammer](https://github.com/SimonPrammer)
[![Avatar for munkhorgil](https://avatars.githubusercontent.com/u/978987?u=eff77a6f7bc4edbace4929731638d4727923013f&v=4)](https://github.com/munkhorgil)[@munkhorgil](https://github.com/munkhorgil)
[![Avatar for alx13](https://avatars.githubusercontent.com/u/1572864?v=4)](https://github.com/alx13)[@alx13](https://github.com/alx13)
[![Avatar for castroCrea](https://avatars.githubusercontent.com/u/20707343?u=25e872c764bd31b71148f2dec896f64be5e034ff&v=4)](https://github.com/castroCrea)[@castroCrea](https://github.com/castroCrea)
[![Avatar for samheutmaker](https://avatars.githubusercontent.com/u/1767032?u=a50f2b3b339eb965b9c812977aa10d64202e2e95&v=4)](https://github.com/samheutmaker)[@samheutmaker](https://github.com/samheutmaker)
[![Avatar for archie-swif](https://avatars.githubusercontent.com/u/2158707?u=8a0aeee45e93ba575321804a7b709bf8897941de&v=4)](https://github.com/archie-swif)[@archie-swif](https://github.com/archie-swif)
[![Avatar for fahreddinozcan](https://avatars.githubusercontent.com/u/88107904?v=4)](https://github.com/fahreddinozcan)[@fahreddinozcan](https://github.com/fahreddinozcan)
[![Avatar for valdo99](https://avatars.githubusercontent.com/u/41517614?u=ba37c9a21db3068953ae50d90c1cd07c3dec3abd&v=4)](https://github.com/valdo99)[@valdo99](https://github.com/valdo99)
[![Avatar for gmpetrov](https://avatars.githubusercontent.com/u/4693180?u=8cf781d9099d6e2f2d2caf7612a5c2811ba13ef8&v=4)](https://github.com/gmpetrov)[@gmpetrov](https://github.com/gmpetrov)
[![Avatar for mattzcarey](https://avatars.githubusercontent.com/u/77928207?u=fc8febe2a4b67384046eb4041b325bb34665d59c&v=4)](https://github.com/mattzcarey)[@mattzcarey](https://github.com/mattzcarey)
[![Avatar for albertpurnama](https://avatars.githubusercontent.com/u/14824254?u=b3acdfc46d3d26d44f66a7312b102172c7ff9722&v=4)](https://github.com/albertpurnama)[@albertpurnama](https://github.com/albertpurnama)
[![Avatar for yroc92](https://avatars.githubusercontent.com/u/17517541?u=7405432fa828c094e130e8193be3cae04ac96d11&v=4)](https://github.com/yroc92)[@yroc92](https://github.com/yroc92)
[![Avatar for Basti-an](https://avatars.githubusercontent.com/u/42387209?u=43ac44545861ce4adec99f973aeea3e6cf9a1bc0&v=4)](https://github.com/Basti-an)[@Basti-an](https://github.com/Basti-an)
[![Avatar for CarlosZiegler](https://avatars.githubusercontent.com/u/38855507?u=65c19ae772581fb7367f646ed90be44311e60e70&v=4)](https://github.com/CarlosZiegler)[@CarlosZiegler](https://github.com/CarlosZiegler)
[![Avatar for iloveitaly](https://avatars.githubusercontent.com/u/150855?v=4)](https://github.com/iloveitaly)[@iloveitaly](https://github.com/iloveitaly)
[![Avatar for dilling](https://avatars.githubusercontent.com/u/5846912?v=4)](https://github.com/dilling)[@dilling](https://github.com/dilling)
[![Avatar for anselm94](https://avatars.githubusercontent.com/u/9033201?u=e5f657c3a1657c089d7cb88121e544ae7212e6f1&v=4)](https://github.com/anselm94)[@anselm94](https://github.com/anselm94)
[![Avatar for sarangan12](https://avatars.githubusercontent.com/u/602456?u=d39962c60b0ac5fea4e97cb67433a42c736c3c5b&v=4)](https://github.com/sarangan12)[@sarangan12](https://github.com/sarangan12)
[![Avatar for gramliu](https://avatars.githubusercontent.com/u/24856195?u=9f55337506cdcac3146772c56b4634e6b46a5e46&v=4)](https://github.com/gramliu)[@gramliu](https://github.com/gramliu)
[![Avatar for jeffchuber](https://avatars.githubusercontent.com/u/891664?u=722172a0061f68ab22819fa88a354ec973f70a63&v=4)](https://github.com/jeffchuber)[@jeffchuber](https://github.com/jeffchuber)
[![Avatar for ywkim](https://avatars.githubusercontent.com/u/588581?u=df702e5b817a56476cb0cd8e7587b9be844d2850&v=4)](https://github.com/ywkim)[@ywkim](https://github.com/ywkim)
[![Avatar for jirimoravcik](https://avatars.githubusercontent.com/u/951187?u=e80c215810058f57145042d12360d463e3a53443&v=4)](https://github.com/jirimoravcik)[@jirimoravcik](https://github.com/jirimoravcik)
[![Avatar for janvi-kalra](https://avatars.githubusercontent.com/u/119091286?u=ed9e9d72bbf9964b80f81e5ba8d1d5b2f860c23f&v=4)](https://github.com/janvi-kalra)[@janvi-kalra](https://github.com/janvi-kalra)
[![Avatar for Anush008](https://avatars.githubusercontent.com/u/46051506?u=026f5f140e8b7ba4744bf971f9ebdea9ebab67ca&v=4)](https://github.com/Anush008)[@Anush008](https://github.com/Anush008)
[![Avatar for yuku](https://avatars.githubusercontent.com/u/96157?v=4)](https://github.com/yuku)[@yuku](https://github.com/yuku)
[![Avatar for conroywhitney](https://avatars.githubusercontent.com/u/249891?u=36703ce68261be59109622877012be08fbc090da&v=4)](https://github.com/conroywhitney)[@conroywhitney](https://github.com/conroywhitney)
[![Avatar for Czechh](https://avatars.githubusercontent.com/u/4779936?u=ab072503433effc18c071b31adda307988877d5e&v=4)](https://github.com/Czechh)[@Czechh](https://github.com/Czechh)
[![Avatar for adam101](https://avatars.githubusercontent.com/u/1535782?v=4)](https://github.com/adam101)[@adam101](https://github.com/adam101)
[![Avatar for jaclar](https://avatars.githubusercontent.com/u/362704?u=52d868cc75c793fa895ef7035ae45516bd915e84&v=4)](https://github.com/jaclar)[@jaclar](https://github.com/jaclar)
[![Avatar for ivoneijr](https://avatars.githubusercontent.com/u/6401435?u=96c11b6333636bd784ffbff72998591f3b3f087b&v=4)](https://github.com/ivoneijr)[@ivoneijr](https://github.com/ivoneijr)
[![Avatar for tonisives](https://avatars.githubusercontent.com/u/1083534?v=4)](https://github.com/tonisives)[@tonisives](https://github.com/tonisives)
[![Avatar for Njuelle](https://avatars.githubusercontent.com/u/3192870?u=e126aae39f36565450ebc854b35c6e890b705e71&v=4)](https://github.com/Njuelle)[@Njuelle](https://github.com/Njuelle)
[![Avatar for Roland0511](https://avatars.githubusercontent.com/u/588050?u=3c91917389117ee84843d961252ab7a2b9097e0e&v=4)](https://github.com/Roland0511)[@Roland0511](https://github.com/Roland0511)
[![Avatar for SebastjanPrachovskij](https://avatars.githubusercontent.com/u/86522260?u=66898c89771c7b8ff38958e9fb9563a1cf7f8004&v=4)](https://github.com/SebastjanPrachovskij)[@SebastjanPrachovskij](https://github.com/SebastjanPrachovskij)
[![Avatar for cinqisap](https://avatars.githubusercontent.com/u/158295355?v=4)](https://github.com/cinqisap)[@cinqisap](https://github.com/cinqisap)
[![Avatar for dylanintech](https://avatars.githubusercontent.com/u/86082012?u=6516bbf39c5af198123d8ed2e35fff5d200f4d2e&v=4)](https://github.com/dylanintech)[@dylanintech](https://github.com/dylanintech)
[![Avatar for andrewnguonly](https://avatars.githubusercontent.com/u/7654246?u=b8599019655adaada3cdc3c3006798df42c44494&v=4)](https://github.com/andrewnguonly)[@andrewnguonly](https://github.com/andrewnguonly)
[![Avatar for ShaunBaker](https://avatars.githubusercontent.com/u/1176557?u=c2e8ecfb45b736fc4d3bbfe182e26936bd519fd3&v=4)](https://github.com/ShaunBaker)[@ShaunBaker](https://github.com/ShaunBaker)
[![Avatar for machulav](https://avatars.githubusercontent.com/u/2857712?u=6809bef8bf07c46b39cd2fcd6027ed86e76372cd&v=4)](https://github.com/machulav)[@machulav](https://github.com/machulav)
[![Avatar for dersia](https://avatars.githubusercontent.com/u/1537958?u=5da46ca1cd93c6fed927c612fc454ba51d0a36b1&v=4)](https://github.com/dersia)[@dersia](https://github.com/dersia)
[![Avatar for joshsny](https://avatars.githubusercontent.com/u/7135900?u=109e43c5e906a8ecc1a2d465c4457f5cf29328a5&v=4)](https://github.com/joshsny)[@joshsny](https://github.com/joshsny)
[![Avatar for jl4nz](https://avatars.githubusercontent.com/u/94814971?u=266358610eeb54c3393dc127718dd6a997fdbf52&v=4)](https://github.com/jl4nz)[@jl4nz](https://github.com/jl4nz)
[![Avatar for eactisgrosso](https://avatars.githubusercontent.com/u/2279003?u=d122874eedb211359d4bf0119877d74ea7d5bcab&v=4)](https://github.com/eactisgrosso)[@eactisgrosso](https://github.com/eactisgrosso)
[![Avatar for frankolson](https://avatars.githubusercontent.com/u/6773706?u=738775762205a07fd7de297297c99f781e957c58&v=4)](https://github.com/frankolson)[@frankolson](https://github.com/frankolson)
[![Avatar for uthmanmoh](https://avatars.githubusercontent.com/u/83053931?u=5c715d2d4f6786fa749276de8eced710be8bfa99&v=4)](https://github.com/uthmanmoh)[@uthmanmoh](https://github.com/uthmanmoh)
[![Avatar for Jordan-Gilliam](https://avatars.githubusercontent.com/u/25993686?u=319a6ed2119197d4d11301614a104ae686f9fc70&v=4)](https://github.com/Jordan-Gilliam)[@Jordan-Gilliam](https://github.com/Jordan-Gilliam)
[![Avatar for winor30](https://avatars.githubusercontent.com/u/12413150?u=691a5e076bdd8c9e9fd637a41496b29e11b0c82f&v=4)](https://github.com/winor30)[@winor30](https://github.com/winor30)
[![Avatar for willemmulder](https://avatars.githubusercontent.com/u/70933?u=206fafc72fd14b4291cb29269c5e1cc8081d043b&v=4)](https://github.com/willemmulder)[@willemmulder](https://github.com/willemmulder)
[![Avatar for aixgeek](https://avatars.githubusercontent.com/u/9697715?u=d139c5568375c2472ac6142325e6856cd766d88d&v=4)](https://github.com/aixgeek)[@aixgeek](https://github.com/aixgeek)
[![Avatar for seuha516](https://avatars.githubusercontent.com/u/79067549?u=de7a2688cb44010afafd055d707f3463585494df&v=4)](https://github.com/seuha516)[@seuha516](https://github.com/seuha516)
[![Avatar for mhart](https://avatars.githubusercontent.com/u/367936?v=4)](https://github.com/mhart)[@mhart](https://github.com/mhart)
[![Avatar for mvaker](https://avatars.githubusercontent.com/u/5671913?u=2e237cb1dd51f9d0dd01f0deb80003163641fc49&v=4)](https://github.com/mvaker)[@mvaker](https://github.com/mvaker)
[![Avatar for vitaly-ps](https://avatars.githubusercontent.com/u/141448200?u=a3902a9c11399c916f1af2bf0ead901e7afe1a67&v=4)](https://github.com/vitaly-ps)[@vitaly-ps](https://github.com/vitaly-ps)
[![Avatar for cbh123](https://avatars.githubusercontent.com/u/14149230?u=ca710ca2a64391470163ddef6b5ea7633ab26872&v=4)](https://github.com/cbh123)[@cbh123](https://github.com/cbh123)
[![Avatar for Neverland3124](https://avatars.githubusercontent.com/u/52025513?u=865e861a1abb0d78be587f685d28fe8a00aee8fe&v=4)](https://github.com/Neverland3124)[@Neverland3124](https://github.com/Neverland3124)
[![Avatar for jasonnathan](https://avatars.githubusercontent.com/u/780157?u=d5efec16b5e3a9913dc44967059a70d9a610755d&v=4)](https://github.com/jasonnathan)[@jasonnathan](https://github.com/jasonnathan)
[![Avatar for Maanethdesilva](https://avatars.githubusercontent.com/u/94875583?v=4)](https://github.com/Maanethdesilva)[@Maanethdesilva](https://github.com/Maanethdesilva)
[![Avatar for fuleinist](https://avatars.githubusercontent.com/u/1163738?v=4)](https://github.com/fuleinist)[@fuleinist](https://github.com/fuleinist)
[![Avatar for kwadhwa18](https://avatars.githubusercontent.com/u/6015244?u=a127081404b8dc16ac0e84a869dfff4ac82bbab2&v=4)](https://github.com/kwadhwa18)[@kwadhwa18](https://github.com/kwadhwa18)
[![Avatar for jeasonnow](https://avatars.githubusercontent.com/u/16950207?u=ab2d0d4f1574398ac842e6bb3c2ba020ab7711eb&v=4)](https://github.com/jeasonnow)[@jeasonnow](https://github.com/jeasonnow)
[![Avatar for sousousore1](https://avatars.githubusercontent.com/u/624438?v=4)](https://github.com/sousousore1)[@sousousore1](https://github.com/sousousore1)
[![Avatar for seth-25](https://avatars.githubusercontent.com/u/49222652?u=203c2bef6cbb77668a289b8272aea4fb654558d5&v=4)](https://github.com/seth-25)[@seth-25](https://github.com/seth-25)
[![Avatar for tomi-mercado](https://avatars.githubusercontent.com/u/60221771?u=f8c1214535e402b0ff5c3428bfe98b586b517106&v=4)](https://github.com/tomi-mercado)[@tomi-mercado](https://github.com/tomi-mercado)
[![Avatar for JHeidinga](https://avatars.githubusercontent.com/u/1702015?u=fa33fb709707e2429f10fbb824abead61628d50c&v=4)](https://github.com/JHeidinga)[@JHeidinga](https://github.com/JHeidinga)
[![Avatar for niklas-lohmann](https://avatars.githubusercontent.com/u/68230177?v=4)](https://github.com/niklas-lohmann)[@niklas-lohmann](https://github.com/niklas-lohmann)
[![Avatar for Durisvk](https://avatars.githubusercontent.com/u/8467003?u=f07b8c070eaed3ad8972be4f4ca91afb1ae6e2c0&v=4)](https://github.com/Durisvk)[@Durisvk](https://github.com/Durisvk)
[![Avatar for BjoernRave](https://avatars.githubusercontent.com/u/36173920?u=c3acae11221a037c16254e2187555ea6259d89c3&v=4)](https://github.com/BjoernRave)[@BjoernRave](https://github.com/BjoernRave)
[![Avatar for qalqi](https://avatars.githubusercontent.com/u/1781048?u=837879a7e62c6b3736dc39a31ff42873bee2c532&v=4)](https://github.com/qalqi)[@qalqi](https://github.com/qalqi)
[![Avatar for katarinasupe](https://avatars.githubusercontent.com/u/61758502?u=20cdcb0bae81b9eb330c94f7cfae462327785219&v=4)](https://github.com/katarinasupe)[@katarinasupe](https://github.com/katarinasupe)
[![Avatar for andrewlei](https://avatars.githubusercontent.com/u/1158058?v=4)](https://github.com/andrewlei)[@andrewlei](https://github.com/andrewlei)
[![Avatar for floomby](https://avatars.githubusercontent.com/u/3113021?v=4)](https://github.com/floomby)[@floomby](https://github.com/floomby)
[![Avatar for milanjrodd](https://avatars.githubusercontent.com/u/121220673?u=55636f26ea48e77e0372008089ff2c38691eaa0a&v=4)](https://github.com/milanjrodd)[@milanjrodd](https://github.com/milanjrodd)
[![Avatar for NickMandylas](https://avatars.githubusercontent.com/u/19514618?u=95f8c29ed06696260722c2c6aa7bac3a1136d7a2&v=4)](https://github.com/NickMandylas)[@NickMandylas](https://github.com/NickMandylas)
[![Avatar for DravenCat](https://avatars.githubusercontent.com/u/55412122?v=4)](https://github.com/DravenCat)[@DravenCat](https://github.com/DravenCat)
[![Avatar for Alireza29675](https://avatars.githubusercontent.com/u/2771377?u=65ec71f9860ac2610e1cb5028173f67713a174d7&v=4)](https://github.com/Alireza29675)[@Alireza29675](https://github.com/Alireza29675)
[![Avatar for zhengxs2018](https://avatars.githubusercontent.com/u/7506913?u=42c32ca59ae2e44532cd45027e5b62d2712cf2a2&v=4)](https://github.com/zhengxs2018)[@zhengxs2018](https://github.com/zhengxs2018)
[![Avatar for clemenspeters](https://avatars.githubusercontent.com/u/13015002?u=059c556d90a2e5639dee42123077d51223c190f0&v=4)](https://github.com/clemenspeters)[@clemenspeters](https://github.com/clemenspeters)
[![Avatar for cmtoomey](https://avatars.githubusercontent.com/u/12201602?u=ea5cbb8d158980f6050dd41ae41b7f72e0a47337&v=4)](https://github.com/cmtoomey)[@cmtoomey](https://github.com/cmtoomey)
[![Avatar for igorshapiro](https://avatars.githubusercontent.com/u/1085209?u=16b60724316a7ed8e8b52af576c121215461922a&v=4)](https://github.com/igorshapiro)[@igorshapiro](https://github.com/igorshapiro)
[![Avatar for ezynda3](https://avatars.githubusercontent.com/u/5308871?v=4)](https://github.com/ezynda3)[@ezynda3](https://github.com/ezynda3)
[![Avatar for more-by-more](https://avatars.githubusercontent.com/u/67614844?u=d3d818efb3e3e2ddda589d6157f853922a460f5b&v=4)](https://github.com/more-by-more)[@more-by-more](https://github.com/more-by-more)
[![Avatar for noble-varghese](https://avatars.githubusercontent.com/u/109506617?u=c1d2a1813c51bff89bfa85d533633ed4c201ba2e&v=4)](https://github.com/noble-varghese)[@noble-varghese](https://github.com/noble-varghese)
[![Avatar for SananR](https://avatars.githubusercontent.com/u/14956384?u=538ff9bf09497059b312067333f68eba75594802&v=4)](https://github.com/SananR)[@SananR](https://github.com/SananR)
[![Avatar for fraserxu](https://avatars.githubusercontent.com/u/1183541?v=4)](https://github.com/fraserxu)[@fraserxu](https://github.com/fraserxu)
[![Avatar for ashvardanian](https://avatars.githubusercontent.com/u/1983160?u=536f2558c6ac33b74a6d89520dcb27ba46954070&v=4)](https://github.com/ashvardanian)[@ashvardanian](https://github.com/ashvardanian)
[![Avatar for adeelehsan](https://avatars.githubusercontent.com/u/8156837?u=99cacfbd962ff58885bdf68e5fc640fc0d3cb87c&v=4)](https://github.com/adeelehsan)[@adeelehsan](https://github.com/adeelehsan)
[![Avatar for henriquegdantas](https://avatars.githubusercontent.com/u/12974790?u=80d76f256a7854da6ae441b6ee078119877398e7&v=4)](https://github.com/henriquegdantas)[@henriquegdantas](https://github.com/henriquegdantas)
[![Avatar for evad1n](https://avatars.githubusercontent.com/u/50718218?u=ee35784971ef8dcdfdb25cfe0a8284ca48724938&v=4)](https://github.com/evad1n)[@evad1n](https://github.com/evad1n)
[![Avatar for benjibc](https://avatars.githubusercontent.com/u/1585539?u=654a21985c875f78a20eda7e4884e8d64de86fba&v=4)](https://github.com/benjibc)[@benjibc](https://github.com/benjibc)
[![Avatar for P-E-B](https://avatars.githubusercontent.com/u/38215315?u=3985b6a3ecb0e8338c5912ea9e20787152d0ad7a&v=4)](https://github.com/P-E-B)[@P-E-B](https://github.com/P-E-B)
[![Avatar for omikader](https://avatars.githubusercontent.com/u/16735699?u=29fc7c7c777c3cabc22449b68bbb01fe2fa0b574&v=4)](https://github.com/omikader)[@omikader](https://github.com/omikader)
[![Avatar for jasongill](https://avatars.githubusercontent.com/u/241711?v=4)](https://github.com/jasongill)[@jasongill](https://github.com/jasongill)
[![Avatar for puigde](https://avatars.githubusercontent.com/u/83642160?u=7e76b13b7484e4601bea47dc6e238c89d453a24d&v=4)](https://github.com/puigde)[@puigde](https://github.com/puigde)
[![Avatar for chase-crumbaugh](https://avatars.githubusercontent.com/u/90289500?u=0129550ecfbb4a92922fff7a406566a47a23dfb0&v=4)](https://github.com/chase-crumbaugh)[@chase-crumbaugh](https://github.com/chase-crumbaugh)
[![Avatar for Zeneos](https://avatars.githubusercontent.com/u/95008961?v=4)](https://github.com/Zeneos)[@Zeneos](https://github.com/Zeneos)
[![Avatar for joseanu](https://avatars.githubusercontent.com/u/2730127?u=9fe1d593bd63c7f116b9c46e9cbd359a2e4304f0&v=4)](https://github.com/joseanu)[@joseanu](https://github.com/joseanu)
[![Avatar for JackFener](https://avatars.githubusercontent.com/u/20380671?u=b51d10b71850203e6360655fa59cc679c5a498e6&v=4)](https://github.com/JackFener)[@JackFener](https://github.com/JackFener)
[![Avatar for swyxio](https://avatars.githubusercontent.com/u/6764957?u=97ad815028595b73b06ee4b0510e66bbe391228d&v=4)](https://github.com/swyxio)[@swyxio](https://github.com/swyxio)
[![Avatar for pczekaj](https://avatars.githubusercontent.com/u/1460539?u=24c2db4a29757f608a54a062340a466cad843825&v=4)](https://github.com/pczekaj)[@pczekaj](https://github.com/pczekaj)
[![Avatar for devinburnette](https://avatars.githubusercontent.com/u/13012689?u=7b68c67ea1bbc272c35be7c0bcf1c66a04554179&v=4)](https://github.com/devinburnette)[@devinburnette](https://github.com/devinburnette)
[![Avatar for ananis25](https://avatars.githubusercontent.com/u/16446513?u=5026326ed39bfee8325c30cdbd24ac20519d21b8&v=4)](https://github.com/ananis25)[@ananis25](https://github.com/ananis25)
[![Avatar for joaopcm](https://avatars.githubusercontent.com/u/58827242?u=3e03812a1074f2ce888b751c48e78a849c7e0aff&v=4)](https://github.com/joaopcm)[@joaopcm](https://github.com/joaopcm)
[![Avatar for SalehHindi](https://avatars.githubusercontent.com/u/15721377?u=37fadd6a7bf9dfa63ceb866bda23ca44a7b2c0c2&v=4)](https://github.com/SalehHindi)[@SalehHindi](https://github.com/SalehHindi)
[![Avatar for cmanou](https://avatars.githubusercontent.com/u/683160?u=e9050e4341c2c9d46b035ea17ea94234634e1b2c&v=4)](https://github.com/cmanou)[@cmanou](https://github.com/cmanou)
[![Avatar for micahriggan](https://avatars.githubusercontent.com/u/3626473?u=508e8c831d8eb804e95985d5191a08c761544fad&v=4)](https://github.com/micahriggan)[@micahriggan](https://github.com/micahriggan)
[![Avatar for w00ing](https://avatars.githubusercontent.com/u/29723695?u=7673821119377d98bba457451719483302147cfa&v=4)](https://github.com/w00ing)[@w00ing](https://github.com/w00ing)
[![Avatar for ardsh](https://avatars.githubusercontent.com/u/23664687?u=158ef7e156a7881b8647ece63683aca2c28f132e&v=4)](https://github.com/ardsh)[@ardsh](https://github.com/ardsh)
[![Avatar for JoeABCDEF](https://avatars.githubusercontent.com/u/39638510?u=f5fac0a3578572817b37a6dfc00adacb705ec7d0&v=4)](https://github.com/JoeABCDEF)[@JoeABCDEF](https://github.com/JoeABCDEF)
[![Avatar for saul-jb](https://avatars.githubusercontent.com/u/2025187?v=4)](https://github.com/saul-jb)[@saul-jb](https://github.com/saul-jb)
[![Avatar for JTCorrin](https://avatars.githubusercontent.com/u/73115680?v=4)](https://github.com/JTCorrin)[@JTCorrin](https://github.com/JTCorrin)
[![Avatar for zandko](https://avatars.githubusercontent.com/u/37948383?u=04ccf6e060b27e39c931c2608381351cf236a28f&v=4)](https://github.com/zandko)[@zandko](https://github.com/zandko)
[![Avatar for federicoestevez](https://avatars.githubusercontent.com/u/10424147?v=4)](https://github.com/federicoestevez)[@federicoestevez](https://github.com/federicoestevez)
[![Avatar for martinseanhunt](https://avatars.githubusercontent.com/u/65744?u=ddac1e773828d8058a40bca680cf549e955f69ae&v=4)](https://github.com/martinseanhunt)[@martinseanhunt](https://github.com/martinseanhunt)
[![Avatar for functorism](https://avatars.githubusercontent.com/u/17207277?u=4df9bc30a55b4da4b3d6fd20a2956afd722bde24&v=4)](https://github.com/functorism)[@functorism](https://github.com/functorism)
[![Avatar for erictt](https://avatars.githubusercontent.com/u/9592198?u=567fa49c73e824525d33eefd836ece16ab9964c8&v=4)](https://github.com/erictt)[@erictt](https://github.com/erictt)
[![Avatar for lesters](https://avatars.githubusercontent.com/u/5798036?u=4eba31d63c3818d17fb8f9aa923599ac63ebfea8&v=4)](https://github.com/lesters)[@lesters](https://github.com/lesters)
[![Avatar for my8bit](https://avatars.githubusercontent.com/u/782268?u=d83da3e6269d53a828bbeb6d661049a1ed185cb0&v=4)](https://github.com/my8bit)[@my8bit](https://github.com/my8bit)
[![Avatar for erhant](https://avatars.githubusercontent.com/u/16037166?u=9d056a2f5059684620e22aa4d880e38183309b51&v=4)](https://github.com/erhant)[@erhant](https://github.com/erhant)
We're so thankful for your support!
And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people).
* * *
Community navigator
===================
Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain–we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more.
Whether you're new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.
* **🦜 Contribute to LangChain**
* **🌍 Meetups, Events, and Hackathons**
* **📣 Help Us Amplify Your Work**
* **💬 Stay in the loop**
🦜 Contribute to LangChain
==========================
LangChain is the product of more than 5,000 contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
* **[Open a pull request](https://github.com/langchain-ai/langchainjs/issues):** we'd appreciate all forms of contributions–new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you.
* **[Read our contributor guidelines:](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
* **Become an expert:** our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at [hello@langchain.dev](mailto:hello@langchain.dev) and we'll take it from there!
* **Integrate with LangChain:** if your product integrates with LangChain–or aspires to–we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at [hello@langchain.dev](mailto:hello@langchain.dev) and tell us what you're working on.
* **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at [hello@langchain.dev](mailto:hello@langchain.dev) if you'd like to explore this role.
🌍 Meetups, Events, and Hackathons
==================================
One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!
* **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
* **Submit an event to our calendar:** email us at [events@langchain.dev](mailto:events@langchain.dev) with a link to your event page! We can also help you spread the word with our local communities.
* **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at [events@langchain.dev](mailto:events@langchain.dev) to tell us about your event!
* **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at [events@langchain.dev](mailto:events@langchain.dev) and we can share more about how it works!
* **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you'd like to do that at an event, send us an email at [hello@langchain.dev](mailto:hello@langchain.dev) with more information about yourself, what you want to talk about, and what city you're based in, and we'll try to match you with an upcoming event!
* **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at [hello@langchain.dev](mailto:hello@langchain.dev) and let us know how we can help.
📣 Help Us Amplify Your Work
============================
If you're working on something you're proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.
* **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we'll almost certainly see it and can show you some love.
* **Publish something on our blog:** if you're writing about your experience building with LangChain, we'd love to post (or crosspost) it on our blog! E-mail [hello@langchain.dev](mailto:hello@langchain.dev) with a draft of your post, or even just an idea for something you want to write about.
* **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at [hello@langchain.dev](mailto:hello@langchain.dev).
💬 Stay in the loop
===================
Here's where our team hangs out, talks shop, spotlights cool work, and shares what we're up to. We'd love to see you there too.
* **[Twitter](https://twitter.com/LangChainAI):** we post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love!
* **[Discord](https://discord.gg/6adMQxSpJS):** connect with 30k+ developers who are building with LangChain.
* **[GitHub](https://github.com/langchain-ai/langchainjs):** open pull requests, contribute to discussions, and contribute code.
* **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit.
* * *
Tutorials
=========
Below are links to tutorials and courses on LangChain.js. For written guides on common use cases for LangChain.js, check out the [tutorials](/v0.2/docs/tutorials/) and [how-to](/v0.2/docs/how_to/) sections.
* * *
Deeplearning.ai
---------------
We've partnered with [Deeplearning.ai](https://deeplearning.ai) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng) on a LangChain.js short course.
It covers LCEL and other building blocks you can combine to build more complex chains, as well as fundamentals around loading data for retrieval augmented generation (RAG). Try it for free below:
* [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js)
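To give a flavor of the RAG fundamentals the course covers, here is a minimal sketch in LangChain.js of loading a few texts into an in-memory vector store and retrieving against them. The example texts, metadata, and the choice of `MemoryVectorStore` with `OpenAIEmbeddings` are illustrative assumptions, not taken from the course itself, and an `OPENAI_API_KEY` environment variable is assumed to be set.

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed and index a handful of texts in memory (no external database).
const vectorStore = await MemoryVectorStore.fromTexts(
  [
    "LCEL lets you pipe runnables together.",
    "Scrims are interactive video walkthroughs.",
  ],
  [{ source: "notes" }, { source: "notes" }],
  new OpenAIEmbeddings()
);

// Expose the store as a retriever so it can slot into a chain.
const retriever = vectorStore.asRetriever();
const docs = await retriever.invoke("What does LCEL do?");
console.log(docs.map((doc) => doc.pageContent));
```

In a real application you would typically swap the in-memory store for a persistent vector database and feed the retrieved documents into a prompt.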
Scrimba interactive guides
--------------------------
[Scrimba](https://scrimba.com) is a code-learning platform that allows you to interactively edit and run code while watching a video walkthrough.
We've partnered with Scrimba on course materials (called "scrims") that teach the fundamentals of building with LangChain.js - check them out below, and check back for more as they become available!
### Learn LangChain.js
* [Learn LangChain.js on Scrimba](https://scrimba.com/learn/langchain)
A full end-to-end course that walks through how to build a chatbot that can answer questions about a provided document. A great introduction to LangChain and a solid first project for learning how to use LangChain Expression Language primitives to perform retrieval!
### LangChain Expression Language (LCEL)
* [The basics (PromptTemplate + LLM)](https://scrimba.com/scrim/c6rD6Nt9)
* [Adding an output parser](https://scrimba.com/scrim/co6ae44248eacc1abd87ae3dc)
* [Attaching function calls to a model](https://scrimba.com/scrim/cof5449f5bc972f8c90be6a82)
* [Composing multiple chains](https://scrimba.com/scrim/co14344c29595bfb29c41f12a)
* [Retrieval chains](https://scrimba.com/scrim/co0e040d09941b4000244db46)
* [Conversational retrieval chains ("Chat with Docs")](https://scrimba.com/scrim/co3ed4a9eb4c6c6d0361a507c)
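As a rough sketch of the pattern the first few scrims above walk through (a prompt piped into a chat model, then an output parser, composed with LCEL), consider the following; it assumes `@langchain/openai` is installed and an OpenAI key is configured, and the prompt text and model name are placeholders rather than anything from the scrims.

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// PromptTemplate + LLM + output parser, composed with .pipe() (LCEL).
const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short joke about {topic}"
);
const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Invoke the chain with values for the prompt's input variables.
const joke = await chain.invoke({ topic: "parrots" });
console.log(joke);
```

The same `.pipe()` composition extends to the multi-chain and retrieval scrims later in the list.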
### Deeper dives
* [Setting up a new `PromptTemplate`](https://scrimba.com/scrim/cbGwRwuV)
* [Setting up `ChatOpenAI` parameters](https://scrimba.com/scrim/cEgbBBUw)
* [Attaching stop sequences](https://scrimba.com/scrim/co9704e389428fe2193eb955c)
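For flavor, here is a hedged sketch of the deeper-dive topics above: constructor parameters on `ChatOpenAI`, and a stop sequence attached with `.bind()`. The specific parameter values and prompt are assumptions for illustration only.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Constructor parameters control default generation behavior.
const model = new ChatOpenAI({
  temperature: 0.2, // lower temperature for more deterministic output
  maxTokens: 128, // cap response length
});

// .bind() attaches invocation-time arguments such as stop sequences,
// so the output ends before the first occurrence of "\n\n".
const terse = model.bind({ stop: ["\n\n"] });
const res = await terse.invoke("List three species of parrot, one per line:");
console.log(res.content);
```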
Neo4j GraphAcademy
------------------
[Neo4j](https://neo4j.com) has put together a hands-on, practical course that shows how to build a movie-recommending chatbot in Next.js. It covers retrieval-augmented generation (RAG), tracking history, and more. Check it out below:
* [Build a Neo4j-backed Chatbot with TypeScript](https://graphacademy.neo4j.com/courses/llm-chatbot-typescript/?ref=langchainjs)
* * *
Tutorials
=========
New to LangChain or to LLM app development in general? Read this material to quickly get up and running.
### Basics
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
### Working with external knowledge
* [Build a Retrieval Augmented Generation (RAG) Application](/v0.2/docs/tutorials/rag)
* [Build a Conversational RAG Application](/v0.2/docs/tutorials/qa_chat_history)
* [Build a Question/Answering system over SQL data](/v0.2/docs/tutorials/sql_qa)
* [Build a Query Analysis System](/v0.2/docs/tutorials/query_analysis)
* [Build a local RAG application](/v0.2/docs/tutorials/local_rag)
* [Build a Question Answering application over a Graph Database](/v0.2/docs/tutorials/graph)
### Specialized tasks
* [Build an Extraction Chain](/v0.2/docs/tutorials/extraction)
* [Classify text into labels](/v0.2/docs/tutorials/classification)
* [Summarize text](/v0.2/docs/tutorials/summarization)
* * *

https://js.langchain.com/v0.2/docs/tutorials/graph
Build a Question Answering application over a Graph Database
============================================================
In this guide we’ll go over the basic ways to create a Q&A chain over a graph database. These systems will allow us to ask a question about the data in a graph database and get back a natural language answer.
⚠️ Security note ⚠️
---------------------------------------------------------------------------
Building Q&A systems over graph databases requires executing model-generated graph queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent’s needs. This will mitigate, though not eliminate, the risks of building a model-driven system. For more on general security best practices, [see here](/v0.2/docs/security).
Architecture
------------------------------------------------------------
At a high level, the steps of most graph chains are:
1. **Convert question to a graph database query**: Model converts user input to a graph database query (e.g. Cypher).
2. **Execute graph database query**: Execute the graph database query.
3. **Answer the question**: Model responds to user input using the query results.
![graph_usecase.png](/v0.2/assets/images/graph_usecase-34d891523e6284bb6230b38c5f8392e5.png)
Setup
---------------------------------------
#### Install dependencies

Tip: see [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm i langchain @langchain/community @langchain/openai neo4j-driver
# or
yarn add langchain @langchain/community @langchain/openai neo4j-driver
# or
pnpm add langchain @langchain/community @langchain/openai neo4j-driver
```
#### Set environment variables
We’ll use OpenAI in this example:
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The example below creates a connection to a Neo4j database and populates it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
```
Schema refreshed successfully.
[]
```
Graph schema
------------------------------------------------------------
In order for an LLM to be able to generate a Cypher statement, it needs information about the graph schema. When you instantiate a graph object, it retrieves the information about the graph schema. If you later make any changes to the graph, you can run the `refreshSchema` method to refresh the schema information.
```typescript
await graph.refreshSchema();
console.log(graph.schema);
```
```
Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}

Relationship properties are the following:

The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie)
```
Great! We’ve got a graph database that we can query. Now let’s try hooking it up to an LLM.
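Before reaching for the built-in chain in the next section, here is a minimal hand-rolled sketch of the three architecture steps above, reusing the `graph` object from the setup. The prompt wording and the names `cypherPrompt` and `answerPrompt` are illustrative assumptions, not part of any built-in API.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const question = "What was the cast of the Casino?";

// 1. Convert the question to a graph database query (Cypher).
const cypherPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Given this graph schema:\n{schema}\nWrite a Cypher query that answers the user's question. Return only the query.",
  ],
  ["user", "{question}"],
]);
const cypher = await cypherPrompt
  .pipe(llm)
  .pipe(new StringOutputParser())
  .invoke({ schema: graph.schema, question });

// 2. Execute the graph database query.
// (A production version would validate the generated query first.)
const results = await graph.query(cypher);

// 3. Answer the question using the query results.
const answerPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's question using only these query results:\n{results}",
  ],
  ["user", "{question}"],
]);
const answer = await answerPrompt
  .pipe(llm)
  .pipe(new StringOutputParser())
  .invoke({ results: JSON.stringify(results), question });

console.log(answer);
```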
Chain
---------------------------------------
Let’s use a simple chain that takes a question, turns it into a Cypher query, executes the query, and uses the result to answer the original question.
![graph_chain.webp](/v0.2/assets/images/graph_chain-6379941793e0fa985e51e4bda0329403.webp)
LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](https://python.langchain.com/docs/use_cases/graph/graph_cypher_qa)
```typescript
import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const chain = GraphCypherQAChain.fromLLM({
  llm,
  graph,
});

const response = await chain.invoke({
  query: "What was the cast of the Casino?",
});
response;
```
{ result: "James Woods, Joe Pesci, Robert De Niro, Sharon Stone" }
### Next steps
For more complex query generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like these and more, check out:
* [Prompting strategies](/v0.2/docs/how_to/graph_prompting): Advanced prompt engineering techniques.
* [Mapping values](/v0.2/docs/how_to/graph_mapping/): Techniques for mapping values from questions to the database.
* [Semantic layer](/v0.2/docs/how_to/graph_semantic): Techniques for implementing semantic layers.
* [Constructing graphs](/v0.2/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs.
* * *

https://js.langchain.com/v0.2/docs/contributing
Welcome Contributors
====================
Hi there! Thank you for even being interested in contributing to LangChain. As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
🗺️ Guidelines
----------------------------------------------------------------
### 👩💻 Ways to contribute
There are many ways to contribute to LangChain. Here are some common ways people contribute:
* [**Documentation**](/v0.2/docs/contributing/documentation/style_guide): Help improve our docs, including this one!
* [**Code**](/v0.2/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
* [**Integrations**](/v0.2/docs/contributing/integrations): Help us integrate with your favorite vendors and tools.
* [**Discussions**](https://github.com/langchain-ai/langchainjs/discussions): Help answer usage questions and discuss issues with users.
### 🚩 GitHub Issues
Our [issues](https://github.com/langchain-ai/langchainjs/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature. If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though with the rapid rate of development in this field some may get out of date. If you notice this happening, please let us know.
### 💭 GitHub Discussions
We have a [discussions](https://github.com/langchain-ai/langchainjs/discussions) page where users can ask usage questions, discuss design decisions, and propose new features.
If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing.
### 🙋 Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase. If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help - we do not want these to get in the way of getting good code into the codebase.
🌟 Recognition
==============
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)! If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.
* * *

https://js.langchain.com/v0.2/docs/tutorials/llm_chain
Build a Simple LLM Application
==============================
In this quickstart we’ll show you how to build a simple LLM application. This application will translate text from English into another language. This is a relatively simple LLM application - it’s just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain - a lot of features can be built with just some prompting and an LLM call!
Concepts
------------------------------------------------
Concepts we will cover are:
* Using [language models](/v0.2/docs/concepts/#chat-models)
* Using [PromptTemplates](/v0.2/docs/concepts/#prompt-templates) and [OutputParsers](/v0.2/docs/concepts/#output-parsers)
* [Chaining](/v0.2/docs/concepts/#langchain-expression-language) a PromptTemplate + LLM + OutputParser using LangChain
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
That’s a fair amount to cover! Let’s dive in.
Setup
---------------------------------------
### Installation
To install LangChain run:
```bash
npm i langchain
# or
yarn add langchain
# or
pnpm add langchain
```
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
Detailed walkthrough
------------------------------------------------------------------------------------
In this guide we will build an application to translate user input from one language to another.
Using Language Models
---------------------------------------------------------------------------------------
First up, let’s learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
### Pick your chat model

Tip: see [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

#### OpenAI

Install dependencies:

```bash
npm i @langchain/openai
# or: yarn add @langchain/openai
# or: pnpm add @langchain/openai
```

Add environment variables:

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4" });
```

#### Anthropic

Install dependencies:

```bash
npm i @langchain/anthropic
# or: yarn add @langchain/anthropic
# or: pnpm add @langchain/anthropic
```

Add environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:

```bash
npm i @langchain/community
# or: yarn add @langchain/community
# or: pnpm add @langchain/community
```

Add environment variables:

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

```bash
npm i @langchain/mistralai
# or: yarn add @langchain/mistralai
# or: pnpm add @langchain/mistralai
```

Add environment variables:

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### Groq

Install dependencies:

```bash
npm i @langchain/groq
# or: yarn add @langchain/groq
# or: pnpm add @langchain/groq
```

Add environment variables:

```bash
GROQ_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
```

#### VertexAI

Install dependencies:

```bash
npm i @langchain/google-vertexai
# or: yarn add @langchain/google-vertexai
# or: pnpm add @langchain/google-vertexai
```

Add environment variables:

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

Instantiate the model:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
```
Let’s first use the model directly. `ChatModel`s are instances of LangChain “Runnables”, which means they expose a standard interface for interacting with them. To simply call the model, we can pass a list of messages to the `.invoke` method.
```typescript
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("Translate the following from English into Italian"),
  new HumanMessage("hi!"),
];

await model.invoke(messages);
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "ciao!",
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "ciao!",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 3, promptTokens: 20, totalTokens: 23 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
```
If we’ve enabled LangSmith, we can see that this run is logged to LangSmith, and can view the [LangSmith trace](https://smith.langchain.com/public/45f1a650-38fb-41e1-9b61-becc0684f2ce/r).
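Since chat models are Runnables, the same standard interface also provides `.batch` for processing several inputs at once and `.stream` for receiving output chunks as they are generated. A quick sketch, assuming the `model` and `messages` from above:

```typescript
// Run several translation requests in one call.
const batched = await model.batch([messages, messages]);

// Stream the response chunk by chunk instead of waiting for the full message.
const stream = await model.stream(messages);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```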
OutputParsers
---------------------------------------------------------------
Notice that the response from the model is an `AIMessage`. This contains a string response along with other metadata about the response. Oftentimes we may just want to work with the string response. We can parse out just this response by using a simple output parser.
We first import the simple output parser.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";

const parser = new StringOutputParser();
```
One way to use it is by itself. For example, we could save the result of the language model call and then pass it to the parser.
```typescript
const result = await model.invoke(messages);
```
```typescript
await parser.invoke(result);
```
"ciao!"
More commonly, we can “chain” the model with this output parser. This means the output parser will get called every time in this chain. This chain takes on the input type of the language model (string or list of messages) and returns the output type of the output parser (string).
We can easily create the chain using the `.pipe` method. The `.pipe` method is used in LangChain to combine two components.
```typescript
const chain = model.pipe(parser);
```
```typescript
await chain.invoke(messages);
```
"Ciao!"
If we now look at LangSmith, we can see that the chain has two steps: first the language model is called, then the result of that is passed to the output parser. We can see the [LangSmith trace](https://smith.langchain.com/public/05bec1c1-fc51-4b2c-ab3b-4b63709e4462/r)
Prompt Templates
------------------------------------------------------------------------
Right now we are passing a list of messages directly into the language model. Where does this list of messages come from? Usually it is constructed from a combination of user input and application logic. This application logic usually takes the raw user input and transforms it into a list of messages ready to pass to the language model. Common transformations include adding a system message or formatting a template with the user input.
PromptTemplates are a concept in LangChain designed to assist with this transformation. They take in raw user input and return data (a prompt) that is ready to pass into a language model.
Let’s create a PromptTemplate here. It will take in two user variables:
* `language`: The language to translate text into
* `text`: The text to translate
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
```
First, let’s create a string that we will format to be the system message:
```typescript
const systemTemplate = "Translate the following into {language}:";
```
Next, we can create the PromptTemplate. This will be a combination of the `systemTemplate` as well as a simpler template for where to put the text.
```typescript
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", systemTemplate],
  ["user", "{text}"],
]);
```
The input to this prompt template is a dictionary. We can play around with this prompt template by itself to see what it does.
```typescript
const result = await promptTemplate.invoke({ language: "italian", text: "hi" });
result;
```
```
ChatPromptValue {
  lc_serializable: true,
  lc_kwargs: {
    messages: [
      SystemMessage {
        lc_serializable: true,
        lc_kwargs: {
          content: "Translate the following into italian:",
          additional_kwargs: {},
          response_metadata: {}
        },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "Translate the following into italian:",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {}
      },
      HumanMessage {
        lc_serializable: true,
        lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "hi",
        name: undefined,
        additional_kwargs: {},
        response_metadata: {}
      }
    ]
  },
  lc_namespace: [ "langchain_core", "prompt_values" ],
  messages: [
    SystemMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Translate the following into italian:",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Translate the following into italian:",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "hi",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ]
}
```
We can see that it returns a `ChatPromptValue` that consists of two messages. If we want to access the messages directly we do:
```typescript
result.toChatMessages();
```
```
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Translate the following into italian:",
      additional_kwargs: {},
      response_metadata: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Translate the following into italian:",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "hi", additional_kwargs: {}, response_metadata: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "hi",
    name: undefined,
    additional_kwargs: {},
    response_metadata: {}
  }
]
```
We can now combine this with the model and the output parser from above. This will chain all three components together.
```typescript
const chain = promptTemplate.pipe(model).pipe(parser);
```
```typescript
await chain.invoke({ language: "italian", text: "hi" });
```
"ciao"
If we take a look at the LangSmith trace, we can see all three components show up in the [LangSmith trace](https://smith.langchain.com/public/cef6edcd-39ed-4c1e-86f7-491a1b611aeb/r)
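Because the composed chain is itself a Runnable, it works with any inputs that match the prompt’s variables, and it supports the same `.batch` interface as the model. A small usage sketch with illustrative values:

```typescript
// Reuse the chain with a different language and text.
await chain.invoke({ language: "french", text: "good morning" });

// Translate several inputs in one call.
await chain.batch([
  { language: "spanish", text: "thank you" },
  { language: "german", text: "see you soon" },
]);
```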
Conclusion
------------------------------------------------------
That’s it! In this tutorial we’ve walked through creating our first simple LLM application. We’ve learned how to work with language models, how to parse their outputs, how to create a prompt template, and how to get great observability into chains you create with LangSmith.
This just scratches the surface of what you will want to learn to become a proficient AI Engineer. Luckily - we’ve got a lot of other resources!
For more in-depth tutorials, check out our [Tutorials](/v0.2/docs/tutorials) section.
If you have specific questions on how to accomplish particular tasks, see our [How-To Guides](/v0.2/docs/how_to) section.
For reading up on the core concepts of LangChain, we’ve got detailed [Conceptual Guides](/v0.2/docs/concepts).
* * *

https://js.langchain.com/v0.2/docs/tutorials/query_analysis
* Build a Query Analysis System
On this page
Build a Query Analysis System
=============================
This page will show how to use query analysis in a basic end-to-end example. We will create a simple search engine, show a failure mode that occurs when passing a raw user question to that search, and then show how query analysis can help address the issue. There are **many** different query analysis techniques, and this end-to-end example will not show all of them.
For the purpose of this example, we will do retrieval over the LangChain YouTube videos.
Setup
-----
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
yarn add langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
pnpm add langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
#### Set environment variables
We’ll use OpenAI in this example:
```
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
### Load documents
We can use the `YouTubeLoader` to load transcripts of a few LangChain videos:
```typescript
import { DocumentInterface } from "@langchain/core/documents";
import { YoutubeLoader } from "langchain/document_loaders/web/youtube";
import { getYear } from "date-fns";

const urls = [
  "https://www.youtube.com/watch?v=HAn9vnJy6S4",
  "https://www.youtube.com/watch?v=dA1cHGACXCo",
  "https://www.youtube.com/watch?v=ZcEMLz27sL4",
  "https://www.youtube.com/watch?v=hvAPnpSfSGo",
  "https://www.youtube.com/watch?v=EhlPDL4QrWY",
  "https://www.youtube.com/watch?v=mmBo8nlu2j0",
  "https://www.youtube.com/watch?v=rQdibOsL1ps",
  "https://www.youtube.com/watch?v=28lC4fqukoc",
  "https://www.youtube.com/watch?v=es-9MgxB-uc",
  "https://www.youtube.com/watch?v=wLRHwKuKvOE",
  "https://www.youtube.com/watch?v=ObIltMaRJvY",
  "https://www.youtube.com/watch?v=DjuXACWYkkU",
  "https://www.youtube.com/watch?v=o7C9ld6Ln-M",
];

let docs: Array<DocumentInterface> = [];

for await (const url of urls) {
  const doc = await YoutubeLoader.createFromUrl(url, {
    language: "en",
    addVideoInfo: true,
  }).load();
  docs = docs.concat(doc);
}

console.log(docs.length);
/*
13
*/

// Add some additional metadata: what year the video was published.
// The JS API does not provide the publish date, so we can use a
// hardcoded array with the dates instead.
const dates = [
  new Date("Jan 31, 2024"),
  new Date("Jan 26, 2024"),
  new Date("Jan 24, 2024"),
  new Date("Jan 23, 2024"),
  new Date("Jan 16, 2024"),
  new Date("Jan 5, 2024"),
  new Date("Jan 2, 2024"),
  new Date("Dec 20, 2023"),
  new Date("Dec 19, 2023"),
  new Date("Nov 27, 2023"),
  new Date("Nov 22, 2023"),
  new Date("Nov 16, 2023"),
  new Date("Nov 2, 2023"),
];

docs.forEach((doc, idx) => {
  // eslint-disable-next-line no-param-reassign
  doc.metadata.publish_year = getYear(dates[idx]);
  // eslint-disable-next-line no-param-reassign
  doc.metadata.publish_date = dates[idx];
});

// Here are the titles of the videos we've loaded:
console.log(docs.map((doc) => doc.metadata.title));
/*
[
  'OpenGPTs',
  'Building a web RAG chatbot: using LangChain, Exa (prev. Metaphor), LangSmith, and Hosted Langserve',
  'Streaming Events: Introducing a new `stream_events` method',
  'LangGraph: Multi-Agent Workflows',
  'Build and Deploy a RAG app with Pinecone Serverless',
  'Auto-Prompt Builder (with Hosted LangServe)',
  'Build a Full Stack RAG App With TypeScript',
  'Getting Started with Multi-Modal LLMs',
  'SQL Research Assistant',
  'Skeleton-of-Thought: Building a New Template from Scratch',
  'Benchmarking RAG over LangChain Docs',
  'Building a Research Assistant from Scratch',
  'LangServe and LangChain Templates Webinar'
]
*/
```
#### API Reference:
* [DocumentInterface](https://v02.api.js.langchain.com/interfaces/langchain_core_documents.DocumentInterface.html) from `@langchain/core/documents`
* [YoutubeLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_youtube.YoutubeLoader.html) from `langchain/document_loaders/web/youtube`
Here’s the metadata associated with each video.
We can see that each document has a source ID, title, description, view count, and author:
```typescript
import { getDocs } from "./docs.js";

const docs = await getDocs();

console.log(docs[0].metadata);
/**
{
  source: 'HAn9vnJy6S4',
  description: 'OpenGPTs is an open-source platform aimed at recreating an experience like the GPT Store - but with any model, any tools, and that you can self-host.\n' +
    '\n' +
    'This video covers both how to use it as well as how to build it.\n' +
    '\n' +
    'GitHub: https://github.com/langchain-ai/opengpts',
  title: 'OpenGPTs',
  view_count: 7262,
  author: 'LangChain'
}
 */

// And here's a sample from a document's contents:
console.log(docs[0].pageContent.slice(0, 500));
/*
hello today I want to talk about open gpts open gpts is a project that we built here at linkchain uh that replicates the GPT store in a few ways so it creates uh end user-facing friendly interface to create different Bots and these Bots can have access to different tools and they can uh be given files to retrieve things over and basically it's a way to create a variety of bots and expose the configuration of these Bots to end users it's all open source um it can be used with open AI it can be us
*/
```
### Indexing documents
Whenever we perform retrieval we need to create an index of documents that we can query. We’ll use a vector store to index our documents, and we’ll chunk them first to make our retrievals more concise and precise:
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { getDocs } from "./docs.js";

const docs = await getDocs();

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 2000 });
const chunkedDocs = await textSplitter.splitDocuments(docs);

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const vectorStore = await Chroma.fromDocuments(chunkedDocs, embeddings, {
  collectionName: "yt-videos",
});
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
Then later, you can retrieve the index without having to re-query and embed:
import "chromadb";import { OpenAIEmbeddings } from "@langchain/openai";import { Chroma } from "@langchain/community/vectorstores/chroma";const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small",});const vectorStore = await Chroma.fromExistingCollection(embeddings, { collectionName: "yt-videos",});
```
[Module: null prototype] {
  AdminClient: [class AdminClient],
  ChromaClient: [class ChromaClient],
  CloudClient: [class CloudClient extends ChromaClient],
  CohereEmbeddingFunction: [class CohereEmbeddingFunction],
  Collection: [class Collection],
  DefaultEmbeddingFunction: [class _DefaultEmbeddingFunction],
  GoogleGenerativeAiEmbeddingFunction: [class _GoogleGenerativeAiEmbeddingFunction],
  HuggingFaceEmbeddingServerFunction: [class HuggingFaceEmbeddingServerFunction],
  IncludeEnum: {
    Documents: "documents",
    Embeddings: "embeddings",
    Metadatas: "metadatas",
    Distances: "distances"
  },
  JinaEmbeddingFunction: [class JinaEmbeddingFunction],
  OpenAIEmbeddingFunction: [class _OpenAIEmbeddingFunction],
  TransformersEmbeddingFunction: [class _TransformersEmbeddingFunction]
}
```
Retrieval without query analysis
--------------------------------
We can perform similarity search on a user question directly to find chunks relevant to the question:
```typescript
const searchResults = await vectorStore.similaritySearch(
  "how do I build a RAG agent"
);
console.log(searchResults[0].metadata.title);
console.log(searchResults[0].pageContent.slice(0, 500));
```
```
OpenGPTs
hardcoded that it will always do a retrieval step here the assistant decides whether to do a retrieval step or not sometimes this is good sometimes this is bad sometimes it you don't need to do a retrieval step when I said hi it didn't need to call it tool um but other times you know the the llm might mess up and not realize that it needs to do a retrieval step and so the rag bot will always do a retrieval step so it's more focused there because this is also a simpler architecture so it's always
```
This works pretty okay! Our first result is somewhat relevant to the question.
What if we wanted to search for results from a specific time period?
```typescript
const searchResults = await vectorStore.similaritySearch(
  "videos on RAG published in 2023"
);
console.log(searchResults[0].metadata.title);
console.log(searchResults[0].metadata.publish_year);
console.log(searchResults[0].pageContent.slice(0, 500));
```
```
OpenGPTs
2024
hardcoded that it will always do a retrieval step here the assistant decides whether to do a retrieval step or not sometimes this is good sometimes this is bad sometimes it you don't need to do a retrieval step when I said hi it didn't need to call it tool um but other times you know the the llm might mess up and not realize that it needs to do a retrieval step and so the rag bot will always do a retrieval step so it's more focused there because this is also a simpler architecture so it's always
```
Our first result is from 2024, and not very relevant to the input. Since we’re just searching against document contents, there’s no way for the results to be filtered on any document attributes.
This is just one failure mode that can arise. Let’s now take a look at how a basic form of query analysis can fix it!
Query analysis
--------------
To handle these failure modes we’ll do some query structuring. This will involve defining a **query schema** that contains a publication-date filter, and using a function-calling model to convert a user question into a structured query.
### Query schema
In this case we’ll add an explicit `publish_year` attribute for the publication date so that results can be filtered on it.
```typescript
import { z } from "zod";

const searchSchema = z
  .object({
    query: z
      .string()
      .describe("Similarity search query applied to video transcripts."),
    publish_year: z.number().optional().describe("Year of video publication."),
  })
  .describe(
    "Search over a database of tutorial videos about a software library."
  );
```
### Query generation
To convert user questions to structured queries we’ll make use of OpenAI’s function-calling API. Specifically, we’ll use the new [ChatModel.withStructuredOutput()](https://v02.api.js.langchain.com/classes/langchain_core_language_models_base.BaseLanguageModel.html#withStructuredOutput) method to handle passing the schema to the model and parsing the output.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.
If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});

const structuredLLM = llm.withStructuredOutput(searchSchema, {
  name: "search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  structuredLLM,
]);
```
Let’s see what queries our analyzer generates for the questions we searched earlier:
```typescript
console.log(await queryAnalyzer.invoke("How do I build a rag agent"));
```

```
{ query: "build a rag agent" }
```

```typescript
console.log(await queryAnalyzer.invoke("videos on RAG published in 2023"));
```

```
{ query: "RAG", publish_year: 2023 }
```
Retrieval with query analysis
-----------------------------
Our query analysis looks pretty good; now let’s try using our generated queries to actually perform retrieval.
**Note:** in our example, we called `withStructuredOutput` with a single `search` function. This forces the LLM to call one - and only one - function, meaning that we will always have one optimized query to look up. Note that this is not always the case - see other guides for how to deal with situations when no - or multiple - optimized queries are returned. A hypothetical sketch of a schema allowing that follows.
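For illustration only, here is one way the schema might be relaxed so the model can return zero or more optimized queries. This `multiSearchSchema` is a hypothetical sketch, not part of this tutorial:

```typescript
import { z } from "zod";

// Hypothetical variant of the search schema: instead of exactly one query,
// the model may return any number of optimized queries, including none.
const multiSearchSchema = z.object({
  queries: z
    .array(
      z.object({
        query: z
          .string()
          .describe("Similarity search query applied to video transcripts."),
        publish_year: z
          .number()
          .optional()
          .describe("Year of video publication."),
      })
    )
    .describe("Zero or more optimized queries; may be empty."),
});
```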
```typescript
import { DocumentInterface } from "@langchain/core/documents";

const retrieval = async (input: {
  query: string;
  publish_year?: number;
}): Promise<DocumentInterface[]> => {
  let _filter: Record<string, any> = {};
  if (input.publish_year) {
    // This syntax is specific to Chroma,
    // the vector database we are using.
    _filter = {
      publish_year: {
        $eq: input.publish_year,
      },
    };
  }

  return vectorStore.similaritySearch(input.query, undefined, _filter);
};
```
```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const retrievalChain = queryAnalyzer.pipe(
  new RunnableLambda({
    func: async (input) =>
      retrieval(input as unknown as { query: string; publish_year?: number }),
  })
);
```
We can now run this chain on the problematic input from before, and see that it yields only results from that year!
```typescript
const results = await retrievalChain.invoke("RAG tutorial published in 2023");

console.log(
  results.map((doc) => ({
    title: doc.metadata.title,
    year: doc.metadata.publish_date,
  }))
);
```

```
[
  { title: "Getting Started with Multi-Modal LLMs", year: "2023-12-20T08:00:00.000Z" },
  { title: "LangServe and LangChain Templates Webinar", year: "2023-11-02T07:00:00.000Z" },
  { title: "Getting Started with Multi-Modal LLMs", year: "2023-12-20T08:00:00.000Z" },
  { title: "Building a Research Assistant from Scratch", year: "2023-11-16T08:00:00.000Z" }
]
```
https://js.langchain.com/v0.2/docs/tutorials/extraction
Build an Extraction Chain
=========================
In this tutorial, we will build a chain to extract structured information from unstructured text.
info
This tutorial will only work with models that support **function/tool calling**.
Concepts
--------
Concepts we will cover are:

* Using [language models](/v0.2/docs/concepts/#chat-models)
* Using [function/tool calling](/v0.2/docs/concepts/#function-tool-calling)
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
Setup
-----
### Installation
To install LangChain run:
* npm
* yarn
* pnpm
npm i langchain
yarn add langchain
pnpm add langchain
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
```
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
The Schema
----------
First, we need to describe what information we want to extract from the text.
We’ll use Zod to define an example schema to extract personal information.
* npm
* yarn
* pnpm
npm i zod @langchain/core
yarn add zod @langchain/core
pnpm add zod @langchain/core
```typescript
import { z } from "zod";

const personSchema = z.object({
  name: z.string().nullish().describe("The name of the person"),
  hair_color: z
    .string()
    .nullish()
    .describe("The color of the person's hair if known"),
  height_in_meters: z.string().nullish().describe("Height measured in meters"),
});
```
There are two best practices when defining schema:
1. Document the **attributes** and the **schema** itself: This information is sent to the LLM and is used to improve the quality of information extraction.
2. Do not force the LLM to make up information! Above we used `.nullish()` for the attributes, allowing the LLM to output `null` or `undefined` if it doesn’t know the answer (see the sketch after the note below).
info
For best performance, document the schema well and make sure the model isn’t forced to return results if there’s no information to be extracted from the text.
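As a small illustration, `.nullish()` makes `null` and `undefined` valid values at the schema level, so the model is never forced to invent one. A minimal standalone Zod sketch (not part of the extraction chain):

```typescript
import { z } from "zod";

// A nullish attribute accepts a string, null, or undefined.
const hairColor = z.string().nullish();

console.log(hairColor.parse("black")); // "black"
console.log(hairColor.parse(null)); // null
console.log(hairColor.parse(undefined)); // undefined
```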
The Extractor
-------------
Let’s create an information extractor using the schema we defined above.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
// import { MessagesPlaceholder } from "@langchain/core/prompts";

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality.
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted).
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract,
return null for the attribute's value.`,
  ],
  // Please see the how-to about improving performance with
  // reference examples.
  // new MessagesPlaceholder("examples"),
  ["human", "{text}"],
]);
```
We need to use a model that supports function/tool calling.
Please review [the documentation](/v0.2/docs/concepts#function-tool-calling) for a list of models that can be used with this API.
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

const runnable = prompt.pipe(llm.withStructuredOutput(personSchema));
```
Let’s test it out:

```typescript
const text = "Alan Smith is 6 feet tall and has blond hair.";
await runnable.invoke({ text });
```

```
{ name: "Alan Smith", hair_color: "blond", height_in_meters: "1.83" }
```
info
Extraction is Generative 🤯
LLMs are generative models, so they can do some pretty cool things like correctly extract the height of the person in meters even though it was provided in feet!
We can see the LangSmith trace [here](https://smith.langchain.com/public/3d44b7e8-e7ca-4e02-951d-3290ccc89d64/r)
Multiple Entities
-----------------
In **most cases**, you should be extracting a list of entities rather than a single entity.
This can be easily achieved by nesting Zod schemas inside one another.
```typescript
import { z } from "zod";

const personSchema = z.object({
  name: z.string().nullish().describe("The name of the person"),
  hair_color: z
    .string()
    .nullish()
    .describe("The color of the person's hair if known"),
  height_in_meters: z.number().nullish().describe("Height measured in meters"),
});

const dataSchema = z.object({
  people: z.array(personSchema).describe("Extracted data about people"),
});
```
info
Extraction might not be perfect here. Please continue to see how to use **Reference Examples** to improve the quality of extraction, and see the **guidelines** section!
```typescript
const runnable = prompt.pipe(llm.withStructuredOutput(dataSchema));

const text =
  "My name is Jeff, my hair is black and i am 6 feet tall. Anna has the same color hair as me.";
await runnable.invoke({ text });
```

```
{
  people: [
    { name: "Jeff", hair_color: "black", height_in_meters: 1.83 },
    { name: "Anna", hair_color: "black", height_in_meters: null }
  ]
}
```
tip
When the schema accommodates the extraction of **multiple entities**, it also allows the model to extract **no entities** if no relevant information is in the text by providing an empty list.
This is usually a **good** thing! It allows specifying **required** attributes on an entity without necessarily forcing the model to detect this entity.
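For example, invoking the same chain on text that mentions no people should come back with an empty list. This is a sketch with a hypothetical input; the exact output depends on the model:

```typescript
// Hypothetical input with no extractable person entities.
await runnable.invoke({
  text: "The solar system has eight planets orbiting the sun.",
});
// Expected shape: { people: [] }
```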
We can see the LangSmith trace [here](https://smith.langchain.com/public/272096ab-9ac5-43f9-aa00-3b8443477d17/r)
Next steps
----------
Now that you understand the basics of extraction with LangChain, you’re ready to proceed to the rest of the how-to guides:
* [Add Examples](/v0.2/docs/how_to/extraction_examples): Learn how to use **reference examples** to improve performance.
* [Handle Long Text](/v0.2/docs/how_to/extraction_long_text): What should you do if the text does not fit into the context window of the LLM?
* [Use a Parsing Approach](/v0.2/docs/how_to/extraction_parse): Use a prompt-based approach to extract with models that do not support **tool/function calling**.
https://js.langchain.com/v0.2/docs/tutorials/chatbot
Build a Chatbot
===============
Overview
--------
We'll go over an example of how to design and implement an LLM-powered chatbot. This chatbot will be able to have a conversation and remember previous interactions.
Note that the chatbot we build here will only use the language model to have a conversation. There are several other related concepts that you may be looking for:
* [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history/): Enable a chatbot experience over an external source of data
* [Agents](/v0.2/docs/tutorials/agents): Build a chatbot that can take actions
This tutorial will cover the basics, which will be helpful for those two more advanced topics, but feel free to skip directly to them should you choose.
Concepts
--------
Here are a few of the high-level components we'll be working with:
* [`Chat Models`](/v0.2/docs/concepts/#chat-models). The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs.
* [`Prompt Templates`](/v0.2/docs/concepts/#prompt-templates), which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.
* [`Chat History`](/v0.2/docs/concepts/#chat-history), which allows a chatbot to "remember" past interactions and take them into account when responding to followup questions.
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
We'll cover how to fit the above components together to create a powerful conversational chatbot.
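As a preview of how prompt templates and chat history fit together, here is a minimal sketch using the `@langchain/core` prompt and message classes (the specific chain built later may differ):

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { AIMessage, HumanMessage } from "@langchain/core/messages";

// A system message, the running chat history, and the latest user input.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

// Formatting the prompt with prior turns is what lets the model "remember".
const formatted = await prompt.invoke({
  chat_history: [new HumanMessage("Hi! I'm Bob."), new AIMessage("Hello Bob!")],
  input: "What's my name?",
});
console.log(formatted.messages);
```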
Setup
-----
### Jupyter Notebook
This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is using them as well. Jupyter notebooks are perfect for learning how to work with LLM systems because things can often go wrong (unexpected output, API down, etc.), and going through guides in an interactive environment is a great way to better understand them.
This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See [here](https://jupyter.org/install) for instructions on how to install.
### Installation
To install LangChain run:
* npm
* yarn
* pnpm
npm i langchain
yarn add langchain
pnpm add langchain
For more details, see our [Installation guide](/v0.2/docs/how_to/installation/).
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
```
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
Or, if in a notebook, you can set them with:
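A minimal sketch, assuming a Node.js-based notebook kernel (such as tslab) where `process.env` is writable:

```typescript
// Assumption: a Node.js-based kernel where process.env can be set directly.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "...";
```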
Quickstart
----------
First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
#### Add environment variables
GROQ_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
#### Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
#### Instantiate the model
```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
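Whichever provider you pick, the resulting `model` can be invoked the same way. A minimal sketch, assuming any of the chat models instantiated above:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Works with any of the chat models instantiated above.
const response = await model.invoke([new HumanMessage("Hi! I'm Bob.")]);
console.log(response.content);
```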
* [How to get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* [Tutorials](/v0.2/docs/tutorials/)
* Build an Agent
On this page
Build an Agent
==============
By themselves, language models can't take actions - they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, and it can determine whether more actions are needed, or whether it is okay to finish.
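Conceptually, the loop an agent runs looks something like the following minimal sketch. Note that `decideNextStep` and `runTool` here are hypothetical stand-ins for an LLM call and a tool execution, not LangChain APIs - we will use the real abstractions below:

```typescript
type AgentDecision =
  | { type: "action"; tool: string; input: string }
  | { type: "finish"; output: string };

// Hypothetical stand-in for the LLM call: given the user input and the
// observations gathered so far, decide on the next action or finish.
async function decideNextStep(
  input: string,
  observations: string[]
): Promise<AgentDecision> {
  if (observations.length === 0) {
    return { type: "action", tool: "search", input };
  }
  return {
    type: "finish",
    output: `Answer based on: ${observations.join("; ")}`,
  };
}

// Hypothetical stand-in for executing a tool.
async function runTool(tool: string, toolInput: string): Promise<string> {
  return `result of ${tool}("${toolInput}")`;
}

async function runAgent(input: string): Promise<string> {
  const observations: string[] = [];
  while (true) {
    const decision = await decideNextStep(input, observations);
    if (decision.type === "finish") {
      return decision.output;
    }
    // Feed each tool result back so the model can decide whether
    // more actions are needed or it is okay to finish.
    observations.push(await runTool(decision.tool, decision.input));
  }
}

console.log(await runAgent("what is the weather in SF?"));
```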
In this tutorial we will build an agent that can interact with two different tools: a retriever over a local index, and a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
Concepts
--------
Concepts we will cover are:
* Using [language models](/v0.2/docs/concepts/#chat-models), in particular their tool calling ability
* Creating a [Retriever](/v0.2/docs/concepts/#retrievers) to expose specific information to our agent
* Using a Search [Tool](/v0.2/docs/concepts/#tools) to look up things online
* Using [LangGraph Agents](/v0.2/docs/concepts/#agents), which use an LLM to decide what to do and then act on that decision
* Debugging and tracing your application using [LangSmith](/v0.2/docs/concepts/#langsmith)
Setup: LangSmith
----------------
By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important. [LangSmith](https://smith.langchain.com) is especially useful for such cases.
When building with LangChain, all steps will automatically be traced in LangSmith. To set up LangSmith we just need to set the following environment variables:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-api-key>"
```
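Alternatively, if shell exports are inconvenient, you can set the same variables from code - a minimal sketch, assuming a Node.js environment:

```typescript
// Equivalent setup from inside a Node.js script. Set these before any
// LangChain calls are made, so tracing picks them up.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "<your-api-key>";
```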
Define tools
------------
We first need to create the tools we want to use. We will use two tools: [Tavily](https://app.tavily.com) (to search online) and a retriever over a local index that we will create.
### [Tavily](https://app.tavily.com)
We have a built-in tool in LangChain to easily use the Tavily search engine as a tool. Note that this requires a Tavily API key set as an environment variable named `TAVILY_API_KEY` - they have a free tier, but if you don’t have a key or don’t want to create one, you can always skip this step.
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults();

const toolResult = await searchTool.invoke("what is the weather in SF?");

console.log(toolResult);

/*
  [{"title":"Weather in December 2023 in San Francisco, California, USA","url":"https://www.timeanddate.com/weather/@5391959/historic?month=12&year=2023","content":"Currently: 52 °F. Broken clouds. (Weather station: San Francisco International Airport, USA). See more current weather Select month: December 2023 Weather in San Francisco — Graph °F Sun, Dec 17 Lo:55 6 pm Hi:57 4 Mon, Dec 18 Lo:54 12 am Hi:55 7 Lo:54 6 am Hi:55 10 Lo:57 12 pm Hi:64 9 Lo:63 6 pm Hi:64 14 Tue, Dec 19 Lo:61","score":0.96006},...]
*/
```
### Retriever
We will also create a retriever over some data of our own. For a deeper explanation of each step here, see our [how-to guides](/v0.2/docs/how_to/).
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.splitDocuments(rawDocs);

const vectorstore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);
const retriever = vectorstore.asRetriever();

const retrieverResult = await retriever.getRelevantDocuments(
  "how to upload a dataset"
);
console.log(retrieverResult[0]);

/*
  Document {
    pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.ProductionClosely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B TestingLangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to",
    metadata: {
      source: 'https://docs.smith.langchain.com/user_guide',
      loc: { lines: [Object] }
    }
  }
*/
```
Now that we have populated the index that we will be retrieving over, we can easily turn it into a tool (the format needed for an agent to use it properly):
```typescript
import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
```
### Tools
Now that we have created both, we can create a list of tools that we will use downstream:
```typescript
const tools = [searchTool, retrieverTool];
```
Create the agent
----------------
Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](https://js.langchain.com/v0.1/docs/modules/agents/agent_types/).
First, we choose the LLM we want to use to guide the agent.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```
Next, we choose the prompt we want to use to guide the agent:
```typescript
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can view it at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);
```
Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts#agents).
```typescript
import { createOpenAIFunctionsAgent } from "langchain/agents";

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});
```
Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/v0.2/docs/concepts#agents).
```typescript
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({
  agent,
  tools,
});
```
Run the agent
-------------
We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won’t remember previous interactions).
```typescript
const result1 = await agentExecutor.invoke({
  input: "hi!",
});

console.log(result1);

/*
  [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
    "input": "hi!"
  }
  [chain/end] [1:chain:AgentExecutor] [1.36s] Exiting Chain run with output: {
    "output": "Hello! How can I assist you today?"
  }

  { input: 'hi!', output: 'Hello! How can I assist you today?' }
*/
```
```typescript
const result2 = await agentExecutor.invoke({
  input: "how can langsmith help with testing?",
});

console.log(result2);

/*
  [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
    "input": "how can langsmith help with testing?"
  }
  [chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:parser:OpenAIFunctionsAgentOutputParser] [66ms] Exiting Chain run with output: {
    "tool": "langsmith_search",
    "toolInput": {
      "query": "how can LangSmith help with testing?"
    },
    "log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n",
    ...
  }
  [tool/start] [1:chain:AgentExecutor > 8:tool:langsmith_search] Entering Tool run with input: "{"query":"how can LangSmith help with testing?"}"
  [retriever/start] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] Entering Retriever run with input: {
    "query": "how can LangSmith help with testing?"
  }
  [retriever/end] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] [294ms] Exiting Retriever run with output: {
    "documents": [
      ... four chunks retrieved from the LangSmith user guide ...
    ]
  }
  [chain/end] [1:chain:AgentExecutor] [5.83s] Exiting Chain run with output: {
    "input": "how can langsmith help with testing?",
    "output": "LangSmith can help with testing in several ways: ..."
  }
  (verbose trace abbreviated)

  {
    input: 'how can langsmith help with testing?',
    output: 'LangSmith can help with testing in several ways:\n' +
      '\n' +
      '1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' +
      '\n' +
      "2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" +
      '\n' +
      '3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' +
      '\n' +
      '4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' +
      '\n' +
      '5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' +
      '\n' +
      'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.'
  }
*/
```
Adding in memory
----------------
As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`.
**Note:** the input variable below needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name.
```typescript
const result3 = await agentExecutor.invoke({
  input: "hi! my name is cob.",
  chat_history: [],
});

console.log(result3);

/*
  {
    input: 'hi! my name is cob.',
    chat_history: [],
    output: "Hello Cob! It's nice to meet you. How can I assist you today?"
  }
*/
```
```typescript
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const result4 = await agentExecutor.invoke({
  input: "what's my name?",
  chat_history: [
    new HumanMessage("hi! my name is cob."),
    new AIMessage("Hello Cob! How can I assist you today?"),
  ],
});

console.log(result4);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: 'hi! my name is cob.', additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob. How can I assist you today, Cob?'
  }
*/
```
If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/v0.2/docs/how_to/message_history/).
```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const messageHistory = new ChatMessageHistory();

const agentWithChatHistory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  // This is needed because in most real world scenarios, a session id is needed per user.
  // It isn't really used here because we are using a simple in memory ChatMessageHistory.
  getMessageHistory: (_sessionId) => messageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

const result5 = await agentWithChatHistory.invoke(
  {
    input: "hi! i'm cob",
  },
  {
    // This is needed because in most real world scenarios, a session id is needed per user.
    // It isn't really used here because we are using a simple in memory ChatMessageHistory.
    configurable: {
      sessionId: "foo",
    },
  }
);

console.log(result5);

/*
  {
    input: "hi! i'm cob",
    chat_history: [
      HumanMessage { content: "hi! i'm cob", additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} }
    ],
    output: 'Hello Cob! How can I assist you today?'
  }
*/
```
```typescript
const result6 = await agentWithChatHistory.invoke(
  {
    input: "what's my name?",
  },
  {
    // This is needed because in most real world scenarios, a session id is needed per user.
    // It isn't really used here because we are using a simple in memory ChatMessageHistory.
    configurable: {
      sessionId: "foo",
    },
  }
);

console.log(result6);

/*
  {
    input: "what's my name?",
    chat_history: [
      HumanMessage { content: "hi! i'm cob", additional_kwargs: {} },
      AIMessage { content: 'Hello Cob! How can I assist you today?', additional_kwargs: {} },
      HumanMessage { content: "what's my name?", additional_kwargs: {} },
      AIMessage { content: 'Your name is Cob. How can I assist you today, Cob?', additional_kwargs: {} }
    ],
    output: 'Your name is Cob. How can I assist you today, Cob?'
  }
*/
```
Conclusion
----------
That’s a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there’s a lot to learn! Head back to the [main agent page](/v0.2/docs/how_to/agent_executor/) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!
Classify Text into Labels
=========================
Tagging means labeling a document with classes such as:
* sentiment
* language
* style (formal, informal etc.)
* covered topics
* political tendency
![Image description](/v0.2/assets/images/tagging-93990e95451d92b715c2b47066384224.png)
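For instance, tagging a short passage might produce a structured object along these lines - a hypothetical shape for illustration; we will define a real schema below:

```typescript
// Hypothetical example of the kind of structured label object we are
// aiming to produce (the real schema is defined in the Quickstart below).
const taggedReview = {
  text: "¡Este producto es fantástico!",
  sentiment: "positive",
  language: "Spanish",
  style: "informal",
};
```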
Overview
--------
Tagging has a few components:
* `function`: Like [extraction](/v0.2/docs/tutorials/extraction), tagging uses [functions](https://openai.com/blog/function-calling-and-other-api-updates) to specify how the model should tag a document
* `schema`: defines how we want to tag the document
Quickstart
----------
Let’s see a very straightforward example of how we can use OpenAI tool calling for tagging in LangChain. We’ll use the `.withStructuredOutput()` method supported by OpenAI models:
```bash
# npm
npm i langchain @langchain/openai @langchain/core zod

# yarn
yarn add langchain @langchain/openai @langchain/core zod

# pnpm
pnpm add langchain @langchain/openai @langchain/core zod
```
Let’s specify a Zod schema with a few properties and their expected types.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const taggingPrompt = ChatPromptTemplate.fromTemplate(
  `Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
`
);

const classificationSchema = z.object({
  sentiment: z.string().describe("The sentiment of the text"),
  aggressiveness: z
    .number()
    .int()
    .min(1)
    .max(10)
    .describe("How aggressive the text is on a scale from 1 to 10"),
  language: z.string().describe("The language the text is written in"),
});

// LLM
const llm = new ChatOpenAI({
  temperature: 0,
  model: "gpt-3.5-turbo-0125",
});

const llmWithStructuredOutput = llm.withStructuredOutput(classificationSchema);

const taggingChain = taggingPrompt.pipe(llmWithStructuredOutput);
```
```typescript
const input =
  "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!";

await taggingChain.invoke({ input });
```

```
{ sentiment: "positive", aggressiveness: 1, language: "Spanish" }
```
As we can see in the example, it correctly interprets what we want.
The results will vary: we may get, for example, sentiments expressed in different languages (‘positive’, ‘enojado’, etc.).
We will see how to control these results in the next section.
Finer control
-------------
Careful schema definition gives us more control over the model’s output.
Specifically, we can define:
* possible values for each property
* descriptions to make sure that the model understands each property
* required properties to be returned
Let’s redeclare our Zod schema to control each of the previously mentioned aspects using enums:
```typescript
import { z } from "zod";

const classificationSchema = z.object({
  sentiment: z
    .enum(["happy", "neutral", "sad"])
    .describe("The sentiment of the text"),
  aggressiveness: z
    .number()
    .int()
    .min(1)
    .max(5)
    .describe(
      "describes how aggressive the statement is, the higher the number the more aggressive"
    ),
  language: z
    .enum(["spanish", "english", "french", "german", "italian"])
    .describe("The language the text is written in"),
});
```
```typescript
const taggingPrompt = ChatPromptTemplate.fromTemplate(
  `Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
`
);

// LLM
const llm = new ChatOpenAI({
  temperature: 0,
  model: "gpt-3.5-turbo-0125",
});

const llmWithStructuredOutput = llm.withStructuredOutput(classificationSchema);

const chain = taggingPrompt.pipe(llmWithStructuredOutput);
```
Now the answers will be restricted in a way we expect!
```typescript
const input =
  "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!";

await chain.invoke({ input });
```

```
{ sentiment: "happy", aggressiveness: 3, language: "spanish" }
```
```typescript
const input = "Estoy muy enojado con vos! Te voy a dar tu merecido!";

await chain.invoke({ input });
```

```
{ sentiment: "sad", aggressiveness: 5, language: "spanish" }
```
```typescript
const input =
  "Weather is ok here, I can go outside without much more than a coat";

await chain.invoke({ input });
```

```
{ sentiment: "neutral", aggressiveness: 3, language: "english" }
```
The [LangSmith trace](https://smith.langchain.com/public/455f5404-8784-49ce-8851-0619b99e936f/r) lets us peek under the hood:
![Image description](/v0.2/assets/images/classification_ls_trace-7b269b067c3751c6d06289c560505656.png)
Summarize Text
==============
A common use case is wanting to summarize long documents. This naturally runs into the context window limitations. Unlike in question-answering, you can't just do some semantic search hacks to only select the chunks of text most relevant to the question (because, in this case, there is no particular question - you want to summarize everything). So what do you do then?
To get started, we recommend checking out the summarization chain, which attacks this problem recursively; a rough sketch of the idea follows the link below.
* [Summarization Chain](https://js.langchain.com/v0.1/docs/modules/chains/popular/summarize)
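If you just want the prebuilt chain, here is a minimal sketch (not taken from this page; the sample documents and the `text` output key are assumptions) of the simpler `map_reduce` strategy:

```typescript
import { loadSummarizationChain } from "langchain/chains";
import { ChatAnthropic } from "@langchain/anthropic";
import { Document } from "@langchain/core/documents";

// Any chat model should work here; ChatAnthropic is just an example.
const model = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });

// "map_reduce" summarizes each chunk independently and then combines the
// partial summaries; "refine" (used in the example below) instead
// iteratively updates a running summary with each new chunk.
const chain = loadSummarizationChain(model, { type: "map_reduce" });

// In practice these documents would come from a loader and a text splitter.
const docs = [
  new Document({ pageContent: "LangChain is a framework for LLM apps." }),
  new Document({ pageContent: "It ships chains, agents, and retrievers." }),
];

// Summarization chains take their input under `input_documents`; the
// summary is assumed to come back under the `text` output key.
const res = await chain.invoke({ input_documents: docs });
console.log(res.text);
```

The full example below uses the `refine` strategy with custom prompts instead.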
Example
-------
Here's an example of how you can use the [RefineDocumentsChain](https://js.langchain.com/v0.1/docs/modules/chains/document/refine) to summarize documents loaded from a YouTube video:
**Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

Install the Anthropic integration with your package manager of choice:

* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`
```typescript
import { loadSummarizationChain } from "langchain/chains";
import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";
import { TokenTextSplitter } from "@langchain/textsplitters";
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const loader = new SearchApiLoader({
  engine: "youtube_transcripts",
  video_id: "WTOm65IZneg",
});

const docs = await loader.load();

const splitter = new TokenTextSplitter({
  chunkSize: 10000,
  chunkOverlap: 250,
});

const docsSummary = await splitter.splitDocuments(docs);

const llmSummary = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0.3,
});

const summaryTemplate = `
You are an expert in summarizing YouTube videos.
Your goal is to create a summary of a podcast.
Below you find the transcript of a podcast:
--------
{text}
--------

The transcript of the podcast will also be used as the basis for a question and answer bot.
Provide some example questions and answers that could be asked about the podcast. Make these questions very specific.

Total output will be a summary of the video and a list of example questions the user could ask of the video.

SUMMARY AND QUESTIONS:
`;

const SUMMARY_PROMPT = PromptTemplate.fromTemplate(summaryTemplate);

const summaryRefineTemplate = `
You are an expert in summarizing YouTube videos.
Your goal is to create a summary of a podcast.
We have provided an existing summary up to a certain point: {existing_answer}

Below you find the transcript of a podcast:
--------
{text}
--------

Given the new context, refine the summary and example questions.
The transcript of the podcast will also be used as the basis for a question and answer bot.
Provide some example questions and answers that could be asked about the podcast. Make these questions very specific.
If the context isn't useful, return the original summary and questions.

Total output will be a summary of the video and a list of example questions the user could ask of the video.

SUMMARY AND QUESTIONS:
`;

const SUMMARY_REFINE_PROMPT = PromptTemplate.fromTemplate(
  summaryRefineTemplate
);

const summarizeChain = loadSummarizationChain(llmSummary, {
  type: "refine",
  verbose: true,
  questionPrompt: SUMMARY_PROMPT,
  refinePrompt: SUMMARY_REFINE_PROMPT,
});

const summary = await summarizeChain.run(docsSummary);

console.log(summary);

/*
  Here is a summary of the key points from the podcast transcript:

  - Jimmy helps provide hearing aids and cochlear implants to deaf and hard-of-hearing people who can't afford them. He helps over 1,000 people hear again.
  - Jimmy surprises recipients with $10,000 cash gifts in addition to the hearing aids. He also gifts things like jet skis, basketball game tickets, and trips to concerts.
  - Jimmy travels internationally to provide hearing aids, visiting places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia.
  - Jimmy donates $100,000 to organizations around the world that teach sign language.
  - The recipients are very emotional and grateful to be able to hear their loved ones again.

  Here are some example questions and answers about the podcast:

  Q: How many people did Jimmy help regain their hearing?
  A: Jimmy helped over 1,000 people regain their hearing.

  Q: What types of hearing devices did Jimmy provide to the recipients?
  A: Jimmy provided cutting-edge hearing aids and cochlear implants.

  Q: In addition to the hearing devices, what surprise gifts did Jimmy give some recipients?
  A: In addition to hearing devices, Jimmy surprised some recipients with $10,000 cash gifts, jet skis, basketball game tickets, and concert tickets.

  Q: What countries did Jimmy travel to in order to help people?
  A: Jimmy traveled to places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia.

  Q: How much money did Jimmy donate to organizations that teach sign language?
  A: Jimmy donated $100,000 to sign language organizations around the world.

  Q: How did the recipients react when they were able to hear again?
  A: The recipients were very emotional and grateful, with many crying tears of joy at being able to hear their loved ones again.
*/
```
#### API Reference:
* [loadSummarizationChain](https://v02.api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains`
* [SearchApiLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_searchapi.SearchApiLoader.html) from `langchain/document_loaders/web/searchapi`
* [TokenTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) from `@langchain/textsplitters`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
https://js.langchain.com/v0.2/docs/tutorials/local_rag
Build a Local RAG Application
=============================
The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the importance of running LLMs locally.
LangChain has integrations with many open-source LLMs that can be run locally.
For example, below we show how to run a RAG pipeline entirely on your laptop, using `OllamaEmbeddings` for local embeddings and `LLaMA2` as the local LLM.
Document Loading
----------------
First, install packages needed for local embeddings and vector storage.
Setup
-----
### Dependencies
We’ll use the following packages:
```bash
npm install --save langchain @langchain/community cheerio
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not required, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:

```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
### Initial setup
Load and split an example document.
We’ll use a blog post on agents as an example.
import "cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 0,});const allSplits = await textSplitter.splitDocuments(docs);console.log(allSplits.length);
146
Next, we’ll use `OllamaEmbeddings` for our local embeddings. Follow [these instructions](https://github.com/ollama/ollama) to set up and run a local Ollama instance.
```typescript
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const embeddings = new OllamaEmbeddings();
const vectorStore = await MemoryVectorStore.fromDocuments(
  allSplits,
  embeddings
);
```
Test that similarity search works with our local embeddings:

```typescript
const question = "What are the approaches to Task Decomposition?";
const docs = await vectorStore.similaritySearch(question);
console.log(docs.length);
```

```
4
```
Model
-----
### LLaMA2
For local LLMs we'll also use `ollama`.
```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";

const ollamaLlm = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});
```

```typescript
const response = await ollamaLlm.invoke(
  "Simulate a rap battle between Stephen Colbert and John Oliver"
);
console.log(response.content);
```

```
[The stage is set for a fierce rap battle between two of the funniest men on television. Stephen Colbert and John Oliver are standing face to face, each with their own microphone and confident smirk on their face.]

Stephen Colbert:
Yo, John Oliver, I heard you've been talking smack
About my show and my satire, saying it's all fake
But let me tell you something, brother, I'm the real deal
I've been making fun of politicians for years, with no conceal

John Oliver:
Oh, Stephen, you think you're so clever and smart
But your jokes are stale and your delivery's a work of art
You're just a pale imitation of the real deal, Jon Stewart
I'm the one who's really making waves, while you're just a little bird

Stephen Colbert:
Well, John, I may not be as loud as you, but I'm smarter
My satire is more subtle, and it goes right over their heads
I'm the one who's been exposing the truth for years
While you're just a British interloper, trying to steal the cheers

John Oliver:
Oh, Stephen, you may have your fans, but I've got the brains
My show is more than just slapstick and silly jokes, it's got depth and gains
I'm the one who's really making a difference, while you're just a clown
My satire is more than just a joke, it's a call to action, and I've got the crown

[The crowd cheers and chants as the two comedians continue their rap battle.]

Stephen Colbert:
You may have your fans, John, but I'm the king of satire
I've been making fun of politicians for years, and I'm still standing tall
My jokes are clever and smart, while yours are just plain dumb
I'm the one who's really in control, and you're just a pretender to the throne.

John Oliver:
Oh, Stephen, you may have your moment in the sun
But I'm the one who's really shining bright, and my star is just beginning to rise
My satire is more than just a joke, it's a call to action, and I've got the power
I'm the one who's really making a difference, and you're just a fleeting flower.

[The crowd continues to cheer and chant as the two comedians continue their rap battle.]
```
See the LangSmith trace [here](https://smith.langchain.com/public/31c178b5-4bea-4105-88c3-7ec95325c817/r).
Using in a chain
----------------
We can create a summarization chain with either model by passing in the retrieved docs and a simple prompt.
The chain formats the prompt template using the provided input values and passes the formatted string to `LLaMA2`, or another specified LLM.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

const prompt = PromptTemplate.fromTemplate(
  "Summarize the main themes in these retrieved docs: {context}"
);

const chain = await createStuffDocumentsChain({
  llm: ollamaLlm,
  outputParser: new StringOutputParser(),
  prompt,
});
```

```typescript
const question = "What are the approaches to Task Decomposition?";
const docs = await vectorStore.similaritySearch(question);

await chain.invoke({
  context: docs,
});
```

```
"The main themes retrieved from the provided documents are:\n" +
  "\n" +
  "1. Sensory Memory: The ability to retain"... 1117 more characters
```
See the LangSmith trace [here](https://smith.langchain.com/public/47cf6c2a-3d86-4f2b-9a51-ee4663b19152/r).
Q&A
---
We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.
Let’s try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt).
```typescript
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const ragPrompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");

const chain = await createStuffDocumentsChain({
  llm: ollamaLlm,
  outputParser: new StringOutputParser(),
  prompt: ragPrompt,
});
```
Let’s see what this prompt actually looks like:
```typescript
console.log(
  ragPrompt.promptMessages.map((msg) => msg.prompt.template).join("\n")
);
```

```
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
```
```typescript
await chain.invoke({ context: docs, question });
```

```
"Task decomposition is a crucial step in breaking down complex problems into manageable parts for eff"... 1095 more characters
```
See the LangSmith trace [here](https://smith.langchain.com/public/dd3a189b-53a1-4f31-9766-244cd04ad1f7/r).
Q&A with retrieval
------------------
Instead of manually passing in docs, we can automatically retrieve them from our vector store based on the user's question.
This uses the default RAG prompt from above and retrieves from the vector store.
```typescript
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const retriever = vectorStore.asRetriever();

const qaChain = RunnableSequence.from([
  {
    context: (input: { question: string }, callbacks) => {
      const retrieverAndFormatter = retriever.pipe(formatDocumentsAsString);
      return retrieverAndFormatter.invoke(input.question, callbacks);
    },
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  ollamaLlm,
  new StringOutputParser(),
]);

await qaChain.invoke({ question });
```

```
"Based on the context provided, I understand that you are asking me to answer a question related to m"... 948 more characters
```
See the LangSmith trace [here](https://smith.langchain.com/public/440e65ee-0301-42cf-afc9-f09cfb52cf64/r).
https://js.langchain.com/v0.2/docs/tutorials/qa_chat_history
Conversational RAG
==================
In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of “memory” of past questions and answers, and some logic for incorporating those into its current thinking.
In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management are [covered here](/v0.2/docs/how_to/message_history).
We will cover two approaches:
1. Chains, in which we always execute a retrieval step;
2. Agents, in which we give an LLM discretion over whether and how to execute a retrieval step (or multiple steps).
For the external knowledge source, we will use the same [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng from the [RAG tutorial](/v0.2/docs/tutorials/rag).
Setup
-----
### Dependencies
We'll use an OpenAI chat model, OpenAI embeddings, and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers).
We’ll use the following packages:
```bash
npm install --save langchain @langchain/openai cheerio
```
We need to set the environment variable `OPENAI_API_KEY`:

```bash
export OPENAI_API_KEY=YOUR_KEY
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
### Initial setup
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = await createStuffDocumentsChain({ llm, prompt, outputParser: new StringOutputParser(),});
Let’s see what this prompt actually looks like:
```typescript
console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n"));
```

```
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
```
```typescript
await ragChain.invoke({
  context: await retriever.invoke("What is Task Decomposition?"),
  question: "What is Task Decomposition?",
});
```

```
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 208 more characters
```
Contextualizing the question
----------------------------
First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it references any information in the chat history.
We’ll use a prompt that includes a `MessagesPlaceholder` variable under the name “chat\_history”. This allows us to pass in a list of Messages to the prompt using the “chat\_history” input key, and these messages will be inserted after the system message and before the human message containing the latest question.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const contextualizeQSystemPrompt = `Given a chat history and the latest user question
which might reference context in the chat history, formulate a standalone question
which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is.`;

const contextualizeQPrompt = ChatPromptTemplate.fromMessages([
  ["system", contextualizeQSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);

const contextualizeQChain = contextualizeQPrompt
  .pipe(llm)
  .pipe(new StringOutputParser());
```
Using this chain we can ask follow-up questions that reference past messages and have them reformulated into standalone questions:
```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

await contextualizeQChain.invoke({
  chat_history: [
    new HumanMessage("What does LLM stand for?"),
    new AIMessage("Large language model"),
  ],
  question: "What is meant by large",
});
```

```
'What is the definition of "large" in the context of a language model?'
```
Chain with chat history
-----------------------
And now we can build our full QA chain.
Notice we add some routing functionality to only run the “condense question chain” when our chat history isn’t empty. Here we’re taking advantage of the fact that if a function in an LCEL chain returns another chain, that chain will itself be invoked.
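To see that mechanic in isolation before the full chain, here is a minimal, self-contained sketch (not part of the original walkthrough; the toy runnables are hypothetical). A function wrapped in a `RunnableLambda` that returns another runnable should cause that runnable to be invoked with the same input:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Two toy runnables we might route between.
const shout = RunnableLambda.from((text: string) => text.toUpperCase());
const echo = RunnableLambda.from((text: string) => text);

// Because the wrapped function returns a runnable (not a plain value),
// LCEL invokes the returned runnable with the original input.
const router = RunnableLambda.from((text: string) =>
  text.length > 5 ? shout : echo
);

console.log(await router.invoke("hello world")); // "HELLO WORLD"
console.log(await router.invoke("hi")); // "hi"
```

The condense-question routing below uses the same trick: the `context` function returns either a chain or a plain value depending on whether chat history is present.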
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const qaSystemPrompt = `You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.

{context}`;

const qaPrompt = ChatPromptTemplate.fromMessages([
  ["system", qaSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);

const contextualizedQuestion = (input: Record<string, unknown>) => {
  if ("chat_history" in input) {
    return contextualizeQChain;
  }
  return input.question;
};

const ragChain = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: (input: Record<string, unknown>) => {
      if ("chat_history" in input) {
        const chain = contextualizedQuestion(input);
        return chain.pipe(retriever).pipe(formatDocumentsAsString);
      }
      return "";
    },
  }),
  qaPrompt,
  llm,
]);
```

```typescript
let chat_history = [];

const question = "What is task decomposition?";
const aiMsg = await ragChain.invoke({ question, chat_history });
console.log(aiMsg);

chat_history = chat_history.concat(aiMsg);

const secondQuestion = "What are common ways of doing it?";
await ragChain.invoke({ question: secondQuestion, chat_history });
```
```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Task decomposition is a technique used to break down complex tasks into smaller and more manageable "... 278 more characters,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Task decomposition is a technique used to break down complex tasks into smaller and more manageable "... 278 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
```

```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Common ways of task decomposition include using prompting techniques like Chain of Thought (CoT) or "... 332 more characters,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Common ways of task decomposition include using prompting techniques like Chain of Thought (CoT) or "... 332 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
```
See the first [LangSmith trace here](https://smith.langchain.com/public/527981c6-5018-4b68-a11a-ebcde77843e7/r) and the [second trace here](https://smith.langchain.com/public/7b97994a-ab9f-4bf3-a2e4-abb609e5610a/r).
Here we’ve gone over how to add application logic for incorporating historical outputs, but we’re still manually updating the chat history and inserting it into each input. In a real Q&A application we’ll want some way of persisting chat history and some way of automatically inserting and updating it.
For this we can use:
* [BaseChatMessageHistory](https://v02.api.js.langchain.com/classes/langchain_core_chat_history.BaseChatMessageHistory.html): Store chat history.
* [RunnableWithMessageHistory](/v0.2/docs/how_to/message_history/): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.
For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/v0.2/docs/how_to/message_history/) LCEL page.
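As a rough sketch of how those two pieces might fit around the chain above (the in-memory session store, key names, and session id here are illustrative assumptions, not code from this tutorial):

```typescript
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { InMemoryChatMessageHistory } from "@langchain/core/chat_history";

// One chat history per session id. In production you would back this with
// a persistent BaseChatMessageHistory implementation instead.
const histories: Record<string, InMemoryChatMessageHistory> = {};

const conversationalRagChain = new RunnableWithMessageHistory({
  runnable: ragChain,
  getMessageHistory: (sessionId) => {
    histories[sessionId] ??= new InMemoryChatMessageHistory();
    return histories[sessionId];
  },
  inputMessagesKey: "question",
  historyMessagesKey: "chat_history",
});

// The wrapper injects past messages under "chat_history" before each call
// and appends the new human/AI turn to the store afterwards.
await conversationalRagChain.invoke(
  { question: "What is task decomposition?" },
  { configurable: { sessionId: "user-123" } }
);
```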
https://js.langchain.com/v0.2/docs/tutorials/rag
Build a Retrieval Augmented Generation (RAG) App
================================================
One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.
This tutorial will show how to build a simple Q&A application over a text data source. Along the way we’ll go over a typical Q&A architecture and highlight additional resources for more advanced Q&A techniques. We’ll also see how LangSmith can help us trace and understand our application. LangSmith will become increasingly helpful as our application grows in complexity.
What is RAG?
------------
RAG is a technique for augmenting LLM knowledge with additional data.
LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data up to a specific point in time that they were trained on. If you want to build AI applications that can reason about private data or data introduced after a model’s cutoff date, you need to augment the knowledge of the model with the specific information it needs. The process of bringing the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).
LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally.
**Note**: Here we focus on Q&A for unstructured data. If you are interested in RAG over structured data, check out our tutorial on doing [question/answering over SQL data](/v0.2/docs/tutorials/sql_qa).
Concepts
--------
A typical RAG application has two main components:
**Indexing**: a pipeline for ingesting data from a source and indexing it. _This usually happens offline._
**Retrieval and generation**: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.
The most common full sequence from raw data to answer looks like:
### Indexing
1. **Load**: First we need to load our data. This is done with [DocumentLoaders](/v0.2/docs/concepts/#document-loaders).
2. **Split**: [Text splitters](/v0.2/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won’t fit in a model’s finite context window.
3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/v0.2/docs/concepts/#vectorstores) and [Embeddings](/v0.2/docs/concepts/#embedding-models) model.
![index_diagram](/v0.2/assets/images/rag_indexing-8160f90a90a33253d0154659cf7d453f.png)
### Retrieval and generation
1. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/v0.2/docs/concepts/#retrievers).
2. **Generate**: A [ChatModel](/v0.2/docs/concepts/#chat-models) / [LLM](/v0.2/docs/concepts/#llms) produces an answer using a prompt that includes the question and the retrieved data.
![retrieval_diagram](/v0.2/assets/images/rag_retrieval_generation-1046a4668d6bb08786ef73c56d4f228a.png)
Setup
-----
### Installation
To install LangChain run:
```bash
npm i langchain
```
For more details, see our [Installation guide](/v0.2/docs/how_to/installation).
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).
After you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
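If you can't export shell variables (for example in some notebook setups), you can also set them on `process.env` before running your code. A minimal sketch, assuming a Node.js runtime:

```typescript
// Assumption: Node.js runtime; set these before any chain runs.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "...";
```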
Preview
-------
In this guide we’ll build a QA app over a website. The specific website we will use is the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng, which allows us to ask questions about the contents of the post.
We can create a simple indexing pipeline and RAG chain to do this in ~20 lines of code:
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
Tip: see [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

**OpenAI**

#### Install dependencies

```bash
npm i @langchain/openai
# OR
yarn add @langchain/openai
# OR
pnpm add @langchain/openai
```

#### Add environment variables

```bash
OPENAI_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

**Anthropic**

#### Install dependencies

```bash
npm i @langchain/anthropic
# OR
yarn add @langchain/anthropic
# OR
pnpm add @langchain/anthropic
```

#### Add environment variables

```bash
ANTHROPIC_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

**FireworksAI**

#### Install dependencies

```bash
npm i @langchain/community
# OR
yarn add @langchain/community
# OR
pnpm add @langchain/community
```

#### Add environment variables

```bash
FIREWORKS_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

**MistralAI**

#### Install dependencies

```bash
npm i @langchain/mistralai
# OR
yarn add @langchain/mistralai
# OR
pnpm add @langchain/mistralai
```

#### Add environment variables

```bash
MISTRAL_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0 });
```

**Groq**

#### Install dependencies

```bash
npm i @langchain/groq
# OR
yarn add @langchain/groq
# OR
pnpm add @langchain/groq
```

#### Add environment variables

```bash
GROQ_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

**VertexAI**

#### Install dependencies

```bash
npm i @langchain/google-vertexai
# OR
yarn add @langchain/google-vertexai
# OR
pnpm add @langchain/google-vertexai
```

#### Add environment variables

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

#### Instantiate the model

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
We can create a simple indexing pipeline and RAG chain to do this in only a few lines of code:
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { StringOutputParser } from "@langchain/core/output_parsers";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = await createStuffDocumentsChain({ llm, prompt, outputParser: new StringOutputParser(),});const retrievedDocs = await retriever.getRelevantDocuments( "what is task decomposition");
Let’s see what this prompt actually looks like:
```typescript
console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n"));
```

```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
```
```typescript
await ragChain.invoke({
  question: "What is task decomposition?",
  context: retrievedDocs,
});
```

```text
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 259 more characters
```
Check out [this LangSmith trace](https://smith.langchain.com/public/54cffec3-5c26-477d-b56d-ebb66a254c8e/r) of the chain above.
You can also construct the RAG chain above in a more declarative way using a `RunnableSequence`. `createStuffDocumentsChain` is basically a wrapper around `RunnableSequence`, so for more complex chains and customizability, you can use `RunnableSequence` directly.
```typescript
import { formatDocumentsAsString } from "langchain/util/document";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const declarativeRagChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
```

```typescript
await declarativeRagChain.invoke("What is task decomposition?");
```

```text
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 208 more characters
```
LangSmith [trace](https://smith.langchain.com/public/c48e186c-c9da-4694-adf2-3a7c94362ec2/r).
Detailed walkthrough
--------------------
Let’s go through the above code step-by-step to really understand what’s going on.
1\. Indexing: Load
------------------
We need to first load the blog post contents. We can use [DocumentLoaders](/v0.2/docs/concepts#document-loaders) for this, which are objects that load in data from a source and return a list of [Documents](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html). A Document is an object with some pageContent (`string`) and metadata (`Record<string, any>`).
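For illustration, here is what constructing a `Document` by hand looks like (a minimal sketch; in this tutorial the loader builds them for you):

```typescript
import { Document } from "@langchain/core/documents";

// A Document pairs raw text with arbitrary metadata about its origin.
const doc = new Document({
  pageContent: "Hello, world!",
  metadata: { source: "https://example.com" },
});
```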
In this case we’ll use the [CheerioWebBaseLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_cheerio.CheerioWebBaseLoader.html), which uses cheerio to load HTML from web URLs and parse it to text. We can pass custom selectors to the constructor to only parse specific elements:
```typescript
const pTagSelector = "p";
const loader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/",
  {
    selector: pTagSelector,
  }
);
const docs = await loader.load();

console.log(docs[0].pageContent.length);
```

```text
22054
```

```typescript
console.log(docs[0].pageContent);
```
Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.Chain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.Self-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.ReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.The ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:In both experiments on knowledge-intensive tasks and decision-making tasks, ReAct works better than the Act-only baseline where Thought: … step is removed.Reflexion (Shinn & Labash 2023) is a framework to equips agents with dynamic memory and self-reflection capabilities to improve reasoning skills. Reflexion has a standard RL setup, in which the reward model provides a simple binary reward and the action space follows the setup in ReAct where the task-specific action space is augmented with language to enable complex reasoning steps. 
After each action $a_t$, the agent computes a heuristic $h_t$ and optionally may decide to reset the environment to start a new trial depending on the self-reflection results.The heuristic function determines when the trajectory is inefficient or contains hallucination and should be stopped. Inefficient planning refers to trajectories that take too long without success. Hallucination is defined as encountering a sequence of consecutive identical actions that lead to the same observation in the environment.Self-reflection is created by showing two-shot examples to LLM and each example is a pair of (failed trajectory, ideal reflection for guiding future changes in the plan). Then reflections are added into the agent’s working memory, up to three, to be used as context for querying LLM.Chain of Hindsight (CoH; Liu et al. 2023) encourages the model to improve on its own outputs by explicitly presenting it with a sequence of past outputs, each annotated with feedback. Human feedback data is a collection of $D_h = \{(x, y_i , r_i , z_i)\}_{i=1}^n$, where $x$ is the prompt, each $y_i$ is a model completion, $r_i$ is the human rating of $y_i$, and $z_i$ is the corresponding human-provided hindsight feedback. Assume the feedback tuples are ranked by reward, $r_n \geq r_{n-1} \geq \dots \geq r_1$ The process is supervised fine-tuning where the data is a sequence in the form of $\tau_h = (x, z_i, y_i, z_j, y_j, \dots, z_n, y_n)$, where $\leq i \leq j \leq n$. The model is finetuned to only predict $y_n$ where conditioned on the sequence prefix, such that the model can self-reflect to produce better output based on the feedback sequence. The model can optionally receive multiple rounds of instructions with human annotators at test time.To avoid overfitting, CoH adds a regularization term to maximize the log-likelihood of the pre-training dataset. To avoid shortcutting and copying (because there are many common words in feedback sequences), they randomly mask 0% - 5% of past tokens during training.The training dataset in their experiments is a combination of WebGPT comparisons, summarization from human feedback and human preference dataset.The idea of CoH is to present a history of sequentially improved outputs in context and train the model to take on the trend to produce better outputs. Algorithm Distillation (AD; Laskin et al. 2023) applies the same idea to cross-episode trajectories in reinforcement learning tasks, where an algorithm is encapsulated in a long history-conditioned policy. Considering that an agent interacts with the environment many times and in each episode the agent gets a little better, AD concatenates this learning history and feeds that into the model. Hence we should expect the next predicted action to lead to better performance than previous trials. The goal is to learn the process of RL instead of training a task-specific policy itself.The paper hypothesizes that any algorithm that generates a set of learning histories can be distilled into a neural network by performing behavioral cloning over actions. The history data is generated by a set of source policies, each trained for a specific task. At the training stage, during each RL run, a random task is sampled and a subsequence of multi-episode history is used for training, such that the learned policy is task-agnostic.In reality, the model has limited context window length, so episodes should be short enough to construct multi-episode history. 
Multi-episodic contexts of 2-4 episodes are necessary to learn a near-optimal in-context RL algorithm. The emergence of in-context RL requires long enough context.In comparison with three baselines, including ED (expert distillation, behavior cloning with expert trajectories instead of learning history), source policy (used for generating trajectories for distillation by UCB), RL^2 (Duan et al. 2017; used as upper bound since it needs online RL), AD demonstrates in-context RL with performance getting close to RL^2 despite only using offline RL and learns much faster than other baselines. When conditioned on partial training history of the source policy, AD also improves much faster than ED baseline.(Big thank you to ChatGPT for helping me draft this section. I’ve learned a lot about the human brain and data structure for fast MIPS in my conversations with ChatGPT.)Memory can be defined as the processes used to acquire, store, retain, and later retrieve information. There are several types of memory in human brains.Sensory Memory: This is the earliest stage of memory, providing the ability to retain impressions of sensory information (visual, auditory, etc) after the original stimuli have ended. Sensory memory typically only lasts for up to a few seconds. Subcategories include iconic memory (visual), echoic memory (auditory), and haptic memory (touch).Short-Term Memory (STM) or Working Memory: It stores information that we are currently aware of and needed to carry out complex cognitive tasks such as learning and reasoning. Short-term memory is believed to have the capacity of about 7 items (Miller 1956) and lasts for 20-30 seconds.Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:We can roughly consider the following mappings:The external memory can alleviate the restriction of finite attention span. A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN) algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.A couple common choices of ANN algorithms for fast MIPS:Check more MIPS algorithms and performance comparison in ann-benchmarks.com.Tool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.MRKL (Karpas et al. 2022), short for “Modular Reasoning, Knowledge and Language”, is a neuro-symbolic architecture for autonomous agents. A MRKL system is proposed to contain a collection of “expert” modules and the general-purpose LLM works as a router to route inquiries to the best suitable expert module. These modules can be neural (e.g. deep learning models) or symbolic (e.g. math calculator, currency converter, weather API).They did an experiment on fine-tuning LLM to call a calculator, using arithmetic as a test case. Their experiments showed that it was harder to solve verbal math problems than explicitly stated math problems because LLMs (7B Jurassic1-large model) failed to extract the right arguments for the basic arithmetic reliably. 
The results highlight when the external symbolic tools can work reliably, knowing when to and how to use the tools are crucial, determined by the LLM capability.Both TALM (Tool Augmented Language Models; Parisi et al. 2022) and Toolformer (Schick et al. 2023) fine-tune a LM to learn to use external tool APIs. The dataset is expanded based on whether a newly added API call annotation can improve the quality of model outputs. See more details in the “External APIs” section of Prompt Engineering.ChatGPT Plugins and OpenAI API function calling are good examples of LLMs augmented with tool use capability working in practice. The collection of tool APIs can be provided by other developers (as in Plugins) or self-defined (as in function calls).HuggingGPT (Shen et al. 2023) is a framework to use ChatGPT as the task planner to select models available in HuggingFace platform according to the model descriptions and summarize the response based on the execution results.The system comprises of 4 stages:(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.Instruction:(2) Model selection: LLM distributes the tasks to expert models, where the request is framed as a multiple-choice question. LLM is presented with a list of models to choose from. Due to the limited context length, task type based filtration is needed.Instruction:(3) Task execution: Expert models execute on the specific tasks and log results.Instruction:(4) Response generation: LLM receives the execution results and provides summarized results to users.To put HuggingGPT into real world usage, a couple challenges need to solve: (1) Efficiency improvement is needed as both LLM inference rounds and interactions with other models slow down the process; (2) It relies on a long context window to communicate over complicated task content; (3) Stability improvement of LLM outputs and external model services.API-Bank (Li et al. 2023) is a benchmark for evaluating the performance of tool-augmented LLMs. It contains 53 commonly used API tools, a complete tool-augmented LLM workflow, and 264 annotated dialogues that involve 568 API calls. The selection of APIs is quite diverse, including search engines, calculator, calendar queries, smart home control, schedule management, health data management, account authentication workflow and more. Because there are a large number of APIs, LLM first has access to API search engine to find the right API to call and then uses the corresponding documentation to make a call.In the API-Bank workflow, LLMs need to make a couple of decisions and at each step we can evaluate how accurate that decision is. Decisions include:This benchmark evaluates the agent’s tool use capabilities at three levels:ChemCrow (Bran et al. 2023) is a domain-specific example in which LLM is augmented with 13 expert-designed tools to accomplish tasks across organic synthesis, drug discovery, and materials design. 
The workflow, implemented in LangChain, reflects what was previously described in the ReAct and MRKLs and combines CoT reasoning with tools relevant to the tasks:One interesting observation is that while the LLM-based evaluation concluded that GPT-4 and ChemCrow perform nearly equivalently, human evaluations with experts oriented towards the completion and chemical correctness of the solutions showed that ChemCrow outperforms GPT-4 by a large margin. This indicates a potential problem with using LLM to evaluate its own performance on domains that requires deep expertise. The lack of expertise may cause LLMs not knowing its flaws and thus cannot well judge the correctness of task results.Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.For example, when requested to "develop a novel anticancer drug", the model came up with the following reasoning steps:They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.Generative Agents (Park, et al. 2023) is super fun experiment where 25 virtual characters, each controlled by a LLM-powered agent, are living and interacting in a sandbox environment, inspired by The Sims. Generative agents create believable simulacra of human behavior for interactive applications.The design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.This fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).AutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.Here is the system message used by AutoGPT, where {{...}} are user inputs:GPT-Engineer is another project to create a whole repository of code given a task specified in natural language. The GPT-Engineer is instructed to think over a list of smaller components to build and ask for user input to clarify questions as needed.Here are a sample conversation for task clarification sent to OpenAI ChatCompletion endpoint used by GPT-Engineer. 
The user inputs are wrapped in {{user input text}}.Then after these clarification, the agent moved into the code writing mode with a different system message.System message:Think step by step and reason yourself to the right decisions to make sure we get it right.You will first lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.Then you will output the content of each file including ALL code.Each file must strictly follow a markdown code block format, where the following tokens must be replaced such thatFILENAME is the lowercase file name including the file extension,LANG is the markup code block language for the code’s language, and CODE is the code:FILENAMEYou will start with the “entrypoint” file, then go to the ones that are imported by that file, and so on.Please note that the code should be fully functional. No placeholders.Follow a language and framework appropriate best practice file naming convention.Make sure that files contain all imports, types etc. Make sure that code in different files are compatible with each other.Ensure to implement all code, if you are unsure, write a plausible implementation.Include module dependency or package manager dependency definition file.Before you finish, double check that all parts of the architecture is present in the files.Useful to know:You almost always put different classes in different files.For Python, you always create an appropriate requirements.txt file.For NodeJS, you always create an appropriate package.json file.You always add a comment briefly describing the purpose of the function definition.You try to add comments explaining very complex bits of logic.You always follow the best practices for the requested languages in terms of describing the code written as a definedpackage/project.Python toolbelt preferences:Conversatin samples:After going through key ideas and demos of building LLM-centered agents, I start to see a couple common limitations:Finite context length: The restricted context capacity limits the inclusion of historical information, detailed instructions, API call context, and responses. The design of the system has to work with this limited communication bandwidth, while mechanisms like self-reflection to learn from past mistakes would benefit a lot from long or infinite context windows. Although vector stores and retrieval can provide access to a larger knowledge pool, their representation power is not as powerful as full attention.Challenges in long-term planning and task decomposition: Planning over a lengthy history and effectively exploring the solution space remain challenging. LLMs struggle to adjust plans when faced with unexpected errors, making them less robust compared to humans who learn from trial and error.Reliability of natural language interface: Current agent system relies on natural language as an interface between LLMs and external components such as memory and tools. However, the reliability of model outputs is questionable, as LLMs may make formatting errors and occasionally exhibit rebellious behavior (e.g. refuse to follow an instruction). Consequently, much of the agent demo code focuses on parsing model output.Cited as:Weng, Lilian. (Jun 2023). LLM-powered Autonomous Agents". Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.Or[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022[2] Yao et al. 
“Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).[3] Liu et al. “Chain of Hindsight Aligns Language Models with Feedback“ arXiv preprint arXiv:2302.02676 (2023).[4] Liu et al. “LLM+P: Empowering Large Language Models with Optimal Planning Proficiency” arXiv preprint arXiv:2304.11477 (2023).[5] Yao et al. “ReAct: Synergizing reasoning and acting in language models.” ICLR 2023.[6] Google Blog. “Announcing ScaNN: Efficient Vector Similarity Search” July 28, 2020.[7] https://chat.openai.com/share/46ff149e-a4c7-4dd7-a800-fc4a642ea389[8] Shinn & Labash. “Reflexion: an autonomous agent with dynamic memory and self-reflection” arXiv preprint arXiv:2303.11366 (2023).[9] Laskin et al. “In-context Reinforcement Learning with Algorithm Distillation” ICLR 2023.[10] Karpas et al. “MRKL Systems A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.” arXiv preprint arXiv:2205.00445 (2022).[11] Weaviate Blog. Why is Vector Search so fast? Sep 13, 2022.[12] Li et al. “API-Bank: A Benchmark for Tool-Augmented LLMs” arXiv preprint arXiv:2304.08244 (2023).[13] Shen et al. “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace” arXiv preprint arXiv:2303.17580 (2023).[14] Bran et al. “ChemCrow: Augmenting large-language models with chemistry tools.” arXiv preprint arXiv:2304.05376 (2023).[15] Boiko et al. “Emergent autonomous scientific research capabilities of large language models.” arXiv preprint arXiv:2304.05332 (2023).[16] Joon Sung Park, et al. “Generative Agents: Interactive Simulacra of Human Behavior.” arXiv preprint arXiv:2304.03442 (2023).[17] AutoGPT. https://github.com/Significant-Gravitas/Auto-GPT[18] GPT-Engineer. https://github.com/AntonOsika/gpt-engineer
### Go deeper
`DocumentLoader`: Class that loads data from a source as a list of `Documents`.

* [Docs](/v0.2/docs/concepts#document-loaders): Detailed documentation on how to use `DocumentLoaders`.
* [Integrations](/v0.2/docs/integrations/document_loaders/)
* [Interface](https://v02.api.js.langchain.com/classes/langchain_document_loaders_base.BaseDocumentLoader.html): API reference for the base interface.
2\. Indexing: Split
-------------------
Our loaded document is over 42k characters long. This is too long to fit in the context window of many models. Even for those models that could fit the full post in their context window, models can struggle to find information in very long inputs.
To handle this we’ll split the `Document` into chunks for embedding and vector storage. This should help us retrieve only the most relevant bits of the blog post at run time.
In this case we’ll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the [RecursiveCharacterTextSplitter](/v0.2/docs/how_to/recursive_text_splitter/), which will recursively split the document using common separators like new lines until each chunk is the appropriate size. This is the recommended text splitter for generic text use cases.
```typescript
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const allSplits = await textSplitter.splitDocuments(docs);
```

```typescript
console.log(allSplits.length);
```

```text
28
```

```typescript
console.log(allSplits[0].pageContent.length);
```

```text
996
```

```typescript
allSplits[10].metadata;
```

```text
{
  source: "https://lilianweng.github.io/posts/2023-06-23-agent/",
  loc: { lines: { from: 1, to: 1 } }
}
```
### Go deeper
`TextSplitter`: Object that splits a list of `Document`s into smaller chunks. Subclass of `DocumentTransformers`.

* Explore context-aware splitters, which keep the location (“context”) of each split in the original `Document`:
  * [Markdown files](/v0.2/docs/how_to/code_splitter/#markdown)
  * [Code](/v0.2/docs/how_to/code_splitter/) (15+ languages)
* [Interface](https://v02.api.js.langchain.com/classes/langchain_text_splitter.TextSplitter.html): API reference for the base interface.

`DocumentTransformer`: Object that performs a transformation on a list of `Document`s.

* Docs: Detailed documentation on how to use `DocumentTransformer`s
* [Integrations](/v0.2/docs/integrations/document_transformers)
* [Interface](https://v02.api.js.langchain.com/modules/langchain_schema_document.html#BaseDocumentTransformer): API reference for the base interface.
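As one example of a context-aware splitter, `RecursiveCharacterTextSplitter.fromLanguage` returns a splitter pre-configured with separators appropriate to a given format. A small sketch (the markdown snippet here is made up):

```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Uses markdown-aware separators (headings, code fences, etc.) when splitting.
const mdSplitter = RecursiveCharacterTextSplitter.fromLanguage("markdown", {
  chunkSize: 500,
  chunkOverlap: 0,
});
const mdChunks = await mdSplitter.splitText(
  "# Title\n\nSome intro text.\n\n## Section\n\nMore text."
);
```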
3\. Indexing: Store
-------------------
Now we need to index our 28 text chunks so that we can search over them at runtime. The most common way to do this is to embed the contents of each document split and insert these embeddings into a vector database (or vector store). When we want to search over our splits, we take a text search query, embed it, and perform some sort of “similarity” search to identify the stored splits with the most similar embeddings to our query embedding. The simplest similarity measure is cosine similarity — we measure the cosine of the angle between each pair of embeddings (which are high dimensional vectors).
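For intuition, cosine similarity can be written out in a few lines (a toy sketch; the vector store performs this comparison for you):

```typescript
// cos(theta) = (a · b) / (|a| * |b|) for two equal-length embedding vectors.
// Values close to 1 mean the embeddings point in nearly the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```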
We can embed and store all of our document splits in a single command using the [Memory](/v0.2/docs/integrations/vectorstores/memory) vector store and [OpenAIEmbeddings](/v0.2/docs/integrations/text_embedding/openai) model.
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);
```
### Go deeper
`Embeddings`: Wrapper around a text embedding model, used for converting text to embeddings.

* [Docs](/v0.2/docs/concepts#embedding-models): Detailed documentation on how to use embeddings.
* [Integrations](/v0.2/docs/integrations/text_embedding): 30+ integrations to choose from.
* [Interface](https://v02.api.js.langchain.com/classes/langchain_core_embeddings.Embeddings.html): API reference for the base interface.
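To see what an embedding looks like, you can call the embedding model directly (a quick sketch; the vector length depends on the model):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
// embedQuery returns a single high-dimensional vector (number[]).
const vector = await embeddings.embedQuery("What is task decomposition?");
console.log(vector.length);
```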
`VectorStore`: Wrapper around a vector database, used for storing and querying embeddings.

* [Docs](/v0.2/docs/concepts#vectorstores): Detailed documentation on how to use vector stores.
* [Integrations](/v0.2/docs/integrations/vectorstores): 40+ integrations to choose from.
* [Interface](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStore.html): API reference for the base interface.
This completes the **Indexing** portion of the pipeline. At this point we have a query-able vector store containing the chunked contents of our blog post. Given a user question, we should ideally be able to return the snippets of the blog post that answer the question.
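As a quick sanity check before wiring up a retriever, you can query the store directly (a sketch using the `vectorStore` built above):

```typescript
// Returns the most similar chunks to the query (4 by default).
const similarDocs = await vectorStore.similaritySearch(
  "What is task decomposition?"
);
console.log(similarDocs[0].pageContent);
```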
4\. Retrieval and Generation: Retrieve
--------------------------------------
Now let’s write the actual application logic. We want to create a simple application that takes a user question, searches for documents relevant to that question, passes the retrieved documents and initial question to a model, and returns an answer.
First we need to define our logic for searching over documents. LangChain defines a [Retriever](/v0.2/docs/concepts#retrievers) interface which wraps an index that can return relevant `Document`s given a string query.
The most common type of Retriever is the [VectorStoreRetriever](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStoreRetriever.html), which uses the similarity search capabilities of a vector store to facilitate retrieval. Any `VectorStore` can easily be turned into a `Retriever` with `VectorStore.asRetriever()`:
```typescript
const retriever = vectorStore.asRetriever({ k: 6, searchType: "similarity" });
```

```typescript
const retrievedDocs = await retriever.invoke(
  "What are the approaches to task decomposition?"
);
```

```typescript
console.log(retrievedDocs.length);
```

```text
6
```

```typescript
console.log(retrievedDocs[0].pageContent);
```
hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain
### Go deeper
Vector stores are commonly used for retrieval, but there are other ways to do retrieval, too.
`Retriever`: An object that returns `Document`s given a text query.

* [Docs](/v0.2/docs/concepts#retrievers): Further documentation on the interface and built-in retrieval techniques, some of which include:
  * `MultiQueryRetriever` [generates variants of the input question](/v0.2/docs/how_to/multiple_queries/) to improve retrieval hit rate.
  * `MultiVectorRetriever` instead generates variants of the embeddings, also in order to improve retrieval hit rate.
  * Max marginal relevance selects for relevance and diversity among the retrieved documents to avoid passing in duplicate context.
  * Documents can be filtered during vector store retrieval using metadata filters.
* Integrations: Integrations with retrieval services.
* Interface: API reference for the base interface.
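For example, max marginal relevance can be enabled when creating the retriever (a sketch; `fetchK` is how many candidates to consider before re-ranking for diversity, and MMR support varies by vector store):

```typescript
// Fetch 20 candidate chunks, then keep the 6 most relevant-yet-diverse ones.
const mmrRetriever = vectorStore.asRetriever({
  k: 6,
  searchType: "mmr",
  searchKwargs: { fetchK: 20 },
});
const mmrDocs = await mmrRetriever.invoke("What is task decomposition?");
```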
5\. Retrieval and Generation: Generate
--------------------------------------
Let’s put it all together into a chain that takes a question, retrieves relevant documents, constructs a prompt, passes that to a model, and parses the output.
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
* Groq
* VertexAI
Tip: see [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

**OpenAI**

#### Install dependencies

```bash
npm i @langchain/openai
# OR
yarn add @langchain/openai
# OR
pnpm add @langchain/openai
```

#### Add environment variables

```bash
OPENAI_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

**Anthropic**

#### Install dependencies

```bash
npm i @langchain/anthropic
# OR
yarn add @langchain/anthropic
# OR
pnpm add @langchain/anthropic
```

#### Add environment variables

```bash
ANTHROPIC_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

**FireworksAI**

#### Install dependencies

```bash
npm i @langchain/community
# OR
yarn add @langchain/community
# OR
pnpm add @langchain/community
```

#### Add environment variables

```bash
FIREWORKS_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

**MistralAI**

#### Install dependencies

```bash
npm i @langchain/mistralai
# OR
yarn add @langchain/mistralai
# OR
pnpm add @langchain/mistralai
```

#### Add environment variables

```bash
MISTRAL_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0 });
```

**Groq**

#### Install dependencies

```bash
npm i @langchain/groq
# OR
yarn add @langchain/groq
# OR
pnpm add @langchain/groq
```

#### Add environment variables

```bash
GROQ_API_KEY=your-api-key
```

#### Instantiate the model

```typescript
import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

**VertexAI**

#### Install dependencies

```bash
npm i @langchain/google-vertexai
# OR
yarn add @langchain/google-vertexai
# OR
pnpm add @langchain/google-vertexai
```

#### Add environment variables

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

#### Instantiate the model

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
We’ll use a prompt for RAG that is checked into the LangChain prompt hub ([here](https://smith.langchain.com/hub/rlm/rag-prompt?organizationId=9213bdc8-a184-442b-901a-cd86ebf8ca6f)).
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
```

```typescript
const exampleMessages = await prompt.invoke({
  context: "filler context",
  question: "filler question",
});
exampleMessages;
```
```text
ChatPromptValue {
  lc_serializable: true,
  lc_kwargs: {
    messages: [
      HumanMessage {
        lc_serializable: true,
        lc_kwargs: {
          content: "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to "... 197 more characters,
          additional_kwargs: {}
        },
        lc_namespace: [ "langchain_core", "messages" ],
        content: "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to "... 197 more characters,
        name: undefined,
        additional_kwargs: {}
      }
    ]
  },
  lc_namespace: [ "langchain_core", "prompt_values" ],
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to "... 197 more characters,
        additional_kwargs: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to "... 197 more characters,
      name: undefined,
      additional_kwargs: {}
    }
  ]
}
```
```typescript
console.log(exampleMessages.messages[0].content);
```

```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: filler question
Context: filler context
Answer:
```
We’ll use the [LCEL Runnable](/v0.2/docs/how_to/#langchain-expression-language-lcel) protocol to define the chain, allowing us to:

* pipe together components and functions in a transparent way
* automatically trace our chain in LangSmith
* get streaming, async, and batched calling out of the box
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
```

```typescript
for await (const chunk of await ragChain.stream(
  "What is task decomposition?"
)) {
  console.log(chunk);
}
```

```text
Task decomposition is the process of breaking down a complex task into smaller and simpler steps. It allows for easier management and interpretation of the model's thinking process. Different approaches, such as Chain of Thought (CoT) and Tree of Thoughts, can be used to decompose tasks and explore multiple reasoning possibilities at each step.
```
Check out the LangSmith trace [here](https://smith.langchain.com/public/6f89b333-de55-4ac2-9d93-ea32d41c9e71/r).
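Because LCEL chains are Runnables, batched calling also works with no extra code (a quick sketch using the chain above):

```typescript
// Runs several questions concurrently and returns the answers in order.
const answers = await ragChain.batch([
  "What is task decomposition?",
  "What is Chain of Thought prompting?",
]);
console.log(answers);
```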
### Go deeper
#### Choosing a model
`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages and returns a message.

* [Docs](/v0.2/docs/concepts/#chat-models): Detailed documentation on how to use chat models.
* [Integrations](/v0.2/docs/integrations/chat/): 25+ integrations to choose from.
* [Interface](https://v02.api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html): API reference for the base interface.
`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.

* [Docs](/v0.2/docs/concepts#llms)
* [Integrations](/v0.2/docs/integrations/llms/): 75+ integrations to choose from.
* [Interface](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html): API reference for the base interface.
See a guide on RAG with locally-running models [here](/v0.2/docs/tutorials/local_rag/).
#### Customizing the prompt
As shown above, we can load prompts (e.g., [this RAG prompt](https://smith.langchain.com/hub/rlm/rag-prompt?organizationId=9213bdc8-a184-442b-901a-cd86ebf8ca6f)) from the prompt hub. The prompt can also be easily customized:
```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

const template = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:`;

const customRagPrompt = PromptTemplate.fromTemplate(template);

const ragChain = await createStuffDocumentsChain({
  llm,
  prompt: customRagPrompt,
  outputParser: new StringOutputParser(),
});

const context = await retriever.getRelevantDocuments(
  "what is task decomposition"
);

await ragChain.invoke({
  question: "What is Task Decomposition?",
  context,
});
```

```text
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 336 more characters
```
Check out the LangSmith trace [here](https://smith.langchain.com/public/47ef2e53-acec-4b74-acdc-e0ea64088279/r).
Next steps
----------
That’s a lot of content we’ve covered in a short amount of time. There are plenty of features, integrations, and extensions to explore in each of the above sections. Apart from the Go deeper sources mentioned above, good next steps include:
* [Return sources](/v0.2/docs/how_to/qa_sources/): Learn how to return source documents
* [Streaming](/v0.2/docs/how_to/qa_streaming/): Learn how to stream outputs and intermediate steps
* [Add chat history](/v0.2/docs/how_to/qa_chat_history_how_to/): Learn how to add chat history to your app
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* How-to guides
On this page
How-to guides
=============
Here you'll find answers to “How do I…?” types of questions. These guides are _goal-oriented_ and _concrete_; they're meant to help you complete a specific task. For conceptual explanations see the [Conceptual guide](/v0.2/docs/concepts/). For end-to-end walkthroughs see [Tutorials](/v0.2/docs/tutorials). For comprehensive descriptions of every class and function see the [API Reference](https://v02.api.js.langchain.com/).
Installation
------------
* [How to: install LangChain packages](/v0.2/docs/how_to/installation/)
Key features
------------
This highlights functionality that is core to using LangChain.
* [How to: return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to: use a chat model to call tools](/v0.2/docs/how_to/tool_calling/)
* [How to: stream runnables](/v0.2/docs/how_to/streaming)
* [How to: debug your LLM apps](/v0.2/docs/how_to/debugging/)
LangChain Expression Language (LCEL)
------------------------------------
LangChain Expression Language is a way to create arbitrary custom chains. It is built on the Runnable protocol. A minimal chain is sketched after the list below.
* [How to: chain runnables](/v0.2/docs/how_to/sequence)
* [How to: stream runnables](/v0.2/docs/how_to/streaming)
* [How to: invoke runnables in parallel](/v0.2/docs/how_to/parallel/)
* [How to: attach runtime arguments to a runnable](/v0.2/docs/how_to/binding/)
* [How to: run custom functions](/v0.2/docs/how_to/functions)
* [How to: pass through arguments from one step to the next](/v0.2/docs/how_to/passthrough)
* [How to: add values to a chain's state](/v0.2/docs/how_to/assign)
* [How to: add message history](/v0.2/docs/how_to/message_history)
* [How to: route execution within a chain](/v0.2/docs/how_to/routing)
* [How to: add fallbacks](/v0.2/docs/how_to/fallbacks)
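As a quick illustration (a minimal sketch, not taken from the guides above, and assuming an OpenAI API key is set in the environment), composing runnables with `.pipe()` looks like this:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Each component is a Runnable; `.pipe()` composes them into a new Runnable.
const prompt = ChatPromptTemplate.fromTemplate("Tell me a short joke about {topic}");
const model = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// The composed chain exposes the same standard interface: invoke, stream, batch.
console.log(await chain.invoke({ topic: "bears" }));
```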
Components
----------
These are the core building blocks you can use when building applications.
### Prompt templates
Prompt Templates are responsible for formatting user input into a format that can be passed to a language model. A short sketch follows the list below.
* [How to: use few shot examples](/v0.2/docs/how_to/few_shot_examples)
* [How to: use few shot examples in chat models](/v0.2/docs/how_to/few_shot_examples_chat/)
* [How to: partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to: compose prompts together](/v0.2/docs/how_to/prompts_composition)
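For instance (a minimal sketch, not from the linked guides), formatting a template with two variables:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// A template with two input variables, filled in at format time.
const prompt = PromptTemplate.fromTemplate(
  "Translate the following text into {language}: {text}"
);

console.log(await prompt.format({ language: "French", text: "Hello!" }));
// Translate the following text into French: Hello!
```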
### Example selectors
Example Selectors are responsible for selecting the correct few shot examples to pass to the prompt.
* [How to: use example selectors](/v0.2/docs/how_to/example_selectors)
* [How to: select examples by length](/v0.2/docs/how_to/example_selectors_length_based)
* [How to: select examples by semantic similarity](/v0.2/docs/how_to/example_selectors_similarity)
### Chat models
Chat Models are newer forms of language models that take messages in and output a message. A basic invocation is sketched after the list below.
* [How to: do function/tool calling](/v0.2/docs/how_to/tool_calling)
* [How to: get models to return structured output](/v0.2/docs/how_to/structured_output)
* [How to: cache model responses](/v0.2/docs/how_to/chat_model_caching)
* [How to: create a custom chat model class](/v0.2/docs/how_to/custom_chat)
* [How to: get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to: stream a response back](/v0.2/docs/how_to/chat_streaming)
* [How to: track token usage](/v0.2/docs/how_to/chat_token_usage_tracking)
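As a minimal sketch (assuming an OpenAI API key in the environment):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4" });

// Chat models take a list of messages and return a single AI message.
const response = await model.invoke([
  new SystemMessage("You are a concise assistant."),
  new HumanMessage("What is LangChain?"),
]);
console.log(response.content);
```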
### LLMs
What LangChain calls LLMs are older forms of language models that take a string in and output a string.
* [How to: cache model responses](/v0.2/docs/how_to/llm_caching)
* [How to: create a custom LLM class](/v0.2/docs/how_to/custom_llm)
* [How to: stream a response back](/v0.2/docs/how_to/streaming_llm)
* [How to: track token usage](/v0.2/docs/how_to/llm_token_usage_tracking)
### Output parsers
Output Parsers are responsible for taking the output of an LLM and parsing it into a more structured format. A small example follows the list below.
* [How to: use output parsers to parse an LLM response into structured format](/v0.2/docs/how_to/output_parser_structured)
* [How to: parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to: parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to: retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
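For example (a minimal sketch, not from the linked guides), parsing a JSON string with the built-in `JsonOutputParser`:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const parser = new JsonOutputParser();

// Parsers are Runnables, so they can be invoked directly
// or piped onto a model, e.g. model.pipe(parser).
const parsed = await parser.invoke('{"setup": "Why?", "punchline": "Because."}');
console.log(parsed.punchline); // Because.
```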
### Document loaders
Document Loaders are responsible for loading documents from a variety of sources. A CSV example is sketched after the list below.
* [How to: load CSV data](/v0.2/docs/how_to/document_loader_csv)
* [How to: load data from a directory](/v0.2/docs/how_to/document_loader_directory)
* [How to: load PDF files](/v0.2/docs/how_to/document_loader_pdf)
* [How to: write a custom document loader](/v0.2/docs/how_to/document_loader_custom)
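For instance (a minimal sketch; the file path is hypothetical, and the import path may differ slightly between versions), loading a CSV file:

```typescript
import { CSVLoader } from "@langchain/community/document_loaders/fs/csv";

// Each row of the CSV becomes one Document, with the row contents
// in pageContent and the source file/line in metadata.
const loader = new CSVLoader("./example_data/example.csv"); // hypothetical path
const docs = await loader.load();
console.log(docs[0].pageContent);
```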
### Text splitters
Text Splitters take a document and split it into chunks that can be used for retrieval. A short sketch follows the list below.
* [How to: recursively split text](/v0.2/docs/how_to/recursive_text_splitter)
* [How to: split by character](/v0.2/docs/how_to/character_text_splitter)
* [How to: split code](/v0.2/docs/how_to/code_splitter)
* [How to: split by tokens](/v0.2/docs/how_to/split_by_token)
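As a minimal sketch (the `@langchain/textsplitters` entrypoint is assumed here; older versions export the same class from `langchain/text_splitter`):

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 100, // maximum characters per chunk
  chunkOverlap: 20, // characters shared between adjacent chunks
});

const docs = await splitter.createDocuments([
  "Some long document text that should be split into retrieval-sized chunks...",
]);
console.log(docs.length);
```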
### Embedding models
Embedding Models take a piece of text and create a numerical representation of it. A short sketch follows the list below.
* [How to: embed text data](/v0.2/docs/how_to/embed_text)
* [How to: cache embedding results](/v0.2/docs/how_to/caching_embeddings)
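For example (a minimal sketch, assuming an OpenAI API key in the environment):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// embedQuery returns a single vector; embedDocuments embeds a batch of texts.
const vector = await embeddings.embedQuery("Hello, world!");
console.log(vector.length); // dimensionality of the embedding
```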
### Vector stores
Vector stores are databases that can efficiently store and retrieve embeddings. A minimal in-memory example is sketched below the list.
* [How to: create and query vector stores](/v0.2/docs/how_to/vectorstores)
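As an illustration (a minimal sketch using the in-memory store; not from the linked guide):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Build an in-memory store from raw texts; each text is embedded on insert.
const store = await MemoryVectorStore.fromTexts(
  ["Mitochondria are the powerhouse of the cell", "Buildings are made of brick"],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

const results = await store.similaritySearch("biology", 1);
console.log(results[0].pageContent);
```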
### Retrievers
Retrievers are responsible for taking a query and returning relevant documents. The most common pattern is sketched after the list below.
* [How to: use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to: generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to: use contextual compression to compress the data retrieved](/v0.2/docs/how_to/contextual_compression)
* [How to: write a custom retriever class](/v0.2/docs/how_to/custom_retriever)
* [How to: generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to: retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to: create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to: reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
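For instance (a minimal sketch; any vector store can be wrapped this way):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const store = await MemoryVectorStore.fromTexts(
  ["LangChain helps build LLM applications"],
  [{}],
  new OpenAIEmbeddings()
);

// Wrap the store as a retriever (a Runnable) returning the top result.
const retriever = store.asRetriever(1);
const docs = await retriever.invoke("What does LangChain do?");
console.log(docs[0].pageContent);
```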
### Indexing
Indexing is the process of keeping your vectorstore in-sync with the underlying data source.
* [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/v0.2/docs/how_to/indexing)
### Tools
LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. A custom tool is sketched after the list below.
* [How to: create custom tools](/v0.2/docs/how_to/custom_tools)
* [How to: use built-in tools and built-in toolkits](/v0.2/docs/how_to/tools_builtin)
* [How to: use a chat model to call tools](/v0.2/docs/how_to/tool_calling/)
* [How to: add ad-hoc tool calling capability to LLMs and Chat Models](/v0.2/docs/how_to/tools_prompting)
* [How to: call tools using multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
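For example (a minimal sketch of a custom tool; the tool name and schema are illustrative):

```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// A tool bundles a name and description (shown to the model)
// with the function that actually runs when the tool is called.
const multiply = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two numbers together.",
  schema: z.object({ a: z.number(), b: z.number() }),
  func: async ({ a, b }) => String(a * b),
});

console.log(await multiply.invoke({ a: 3, b: 4 })); // "12"
```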
### Agents
note
For in depth how-to guides for agents, please check out [LangGraph](https://langchain-ai.github.io/langgraphjs/) documentation.
* [How to: use legacy LangChain Agents (AgentExecutor)](/v0.2/docs/how_to/agent_executor)
### Custom
All LangChain components can easily be extended to support your own versions.
* [How to: create a custom LLM class](/v0.2/docs/how_to/custom_llm)
* [How to: write a custom retriever class](/v0.2/docs/how_to/custom_retriever)
* [How to: write a custom document loader](/v0.2/docs/how_to/document_loader_custom)
* [How to: define a custom tool](/v0.2/docs/how_to/custom_tools)
Use cases
---------
These guides cover use-case specific details.
### Q&A with RAG
Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
* [How to: add chat history](/v0.2/docs/how_to/qa_chat_history_how_to/)
* [How to: stream](/v0.2/docs/how_to/qa_streaming/)
* [How to: return sources](/v0.2/docs/how_to/qa_sources/)
* [How to: return citations](/v0.2/docs/how_to/qa_citations/)
* [How to: do per-user retrieval](/v0.2/docs/how_to/qa_per_user/)
### Extraction
Extraction is when you use LLMs to extract structured information from unstructured text.
* [How to: use reference examples](/v0.2/docs/how_to/extraction_examples/)
* [How to: handle long text](/v0.2/docs/how_to/extraction_long_text/)
* [How to: do extraction without using function calling](/v0.2/docs/how_to/extraction_parse)
### Chatbots
Chatbots involve using an LLM to have a conversation.
* [How to: manage memory](/v0.2/docs/how_to/chatbots_memory)
* [How to: do retrieval](/v0.2/docs/how_to/chatbots_retrieval)
* [How to: use tools](/v0.2/docs/how_to/chatbots_tools)
### Query analysis
Query Analysis is the task of using an LLM to generate a query to send to a retriever.
* [How to: add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to: handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to: handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to: handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to: construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to: deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
### Q&A over SQL + CSV
You can use LLMs to do question answering over tabular data.
* [How to: use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to: do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to: deal with large databases](/v0.2/docs/how_to/sql_large_db)
### Q&A over graph databases
You can use an LLM to do question answering over graph databases.
* [How to: map values to a database](/v0.2/docs/how_to/graph_mapping)
* [How to: add a semantic layer over the database](/v0.2/docs/how_to/graph_semantic)
* [How to: improve results with prompting](/v0.2/docs/how_to/graph_prompting)
* [How to: construct knowledge graphs](/v0.2/docs/how_to/graph_constructing)
Build a Question/Answering system over SQL data
===============================================
In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database. These systems will allow us to ask a question about the data in a SQL database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs to answer the question.
⚠️ Security note ⚠️
-------------------
Building Q&A systems over SQL databases can require executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent's needs. This will mitigate, though not eliminate, the risks of building a model-driven system. For more on general security best practices, see [here](/v0.2/docs/security).
Architecture
------------
At a high level, the steps of most SQL chains and agents are:
1. **Convert question to SQL query**: Model converts user input to a SQL query.
2. **Execute SQL query**: Execute the SQL query.
3. **Answer the question**: Model responds to user input using the query results.
![SQL Use Case Diagram](/v0.2/assets/images/sql_usecase-d432701261f05ab69b38576093718cf3.png)
Setup
-----
First, get required packages and set environment variables:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm i langchain @langchain/community @langchain/openai
# yarn
yarn add langchain @langchain/community @langchain/openai
# pnpm
pnpm add langchain @langchain/community @langchain/openai
```
We default to OpenAI models in this guide.
```bash
export OPENAI_API_KEY=<your key>

# Uncomment the below to use LangSmith. Not required, but recommended for debugging and observability.
# export LANGCHAIN_API_KEY=<your key>
# export LANGCHAIN_TRACING_V2=true
```
The example below uses a SQLite connection with the Chinook sample database:

```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album',       'Artist',
  'Customer',    'Employee',
  'Genre',       'Invoice',
  'InvoiceLine', 'MediaType',
  'Playlist',    'PlaylistTrack',
  'Track'
]
*/
```
#### API Reference:
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Great! We've got a SQL database that we can query. Now let's try hooking it up to an LLM.
Chain
-----
Let's create a simple chain that takes a question, turns it into a SQL query, executes the query, and uses the result to answer the original question.
### Convert question to SQL query
The first step in a SQL chain or agent is to take the user input and convert it to a SQL query. LangChain comes with a built-in chain for this: [`createSqlQueryChain`](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html).
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const chain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

const response = await chain.invoke({
  question: "How many employees are there?",
});
console.log("response", response);
/**
response SELECT COUNT(*) FROM "Employee"
*/
console.log("db run result", await db.run(response));
/**
db run result [{"COUNT(*)":8}]
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
We can look at the [LangSmith trace](https://smith.langchain.com/public/6d8f0213-9f02-498e-aeb2-ec774e324e2c/r) to get a better understanding of what this chain is doing. We can also inspect the chain directly for its prompts. Looking at the prompt (below), we can see that it:
* Is dialect-specific. In this case it references SQLite explicitly.
* Has definitions for all the available tables.
* Has three example rows for each table.
This technique is inspired by papers like [this one](https://arxiv.org/pdf/2204.00498.pdf), which suggest that showing example rows and being explicit about tables improves performance. We can also inspect the full prompt via the LangSmith trace:
![Chain Prompt](/v0.2/assets/images/sql_quickstart_langsmith_prompt-e90559eddd490ceee277642d9e76b37b.png)
### Execute SQL query
Now that we've generated a SQL query, we'll want to execute it. This is the most dangerous part of creating a SQL chain. Consider carefully if it is OK to run automated queries over your data. Minimize the database connection permissions as much as possible. Consider adding a human approval step to your chains before query execution (see below).
We can use the [`QuerySqlTool`](https://v02.api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) to easily add query execution to our chain:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { QuerySqlTool } from "langchain/tools/sql";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const executeQuery = new QuerySqlTool(db);
const writeQuery = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

const chain = writeQuery.pipe(executeQuery);
console.log(await chain.invoke({ question: "How many employees are there" }));
/**
[{"COUNT(*)":8}]
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [QuerySqlTool](https://v02.api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) from `langchain/tools/sql`
tip
See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/3cbcf6f2-a55b-4701-a2e3-9928e4747328/r).
### Answer the question
Now that we have a way to automatically generate and execute queries, we just need to combine the original question and SQL query result to generate a final answer. We can do this by passing the question and result to the LLM once more:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { QuerySqlTool } from "langchain/tools/sql";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const executeQuery = new QuerySqlTool(db);
const writeQuery = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

const answerPrompt = PromptTemplate.fromTemplate(`Given the following user question, corresponding SQL query, and SQL result, answer the user question.

Question: {question}
SQL Query: {query}
SQL Result: {result}
Answer: `);

const answerChain = answerPrompt.pipe(llm).pipe(new StringOutputParser());

const chain = RunnableSequence.from([
  RunnablePassthrough.assign({ query: writeQuery }).assign({
    result: (i: { query: string }) => executeQuery.invoke(i.query),
  }),
  answerChain,
]);

console.log(await chain.invoke({ question: "How many employees are there" }));
/**
There are 8 employees.
*/
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [QuerySqlTool](https://v02.api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) from `langchain/tools/sql`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
tip
See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/d130ce1f-1fce-4192-921e-4b522884ec1a/r).
### Next steps
For more complex query generation, we may want to create few-shot prompts or add query-checking steps. For these and other advanced techniques, check out:
* [Prompting strategies](/v0.2/docs/how_to/sql_prompting): Advanced prompt engineering techniques.
* [Query checking](/v0.2/docs/how_to/sql_query_checking): Add query validation and error handling.
* [Large databases](/v0.2/docs/how_to/sql_large_db): Techniques for working with large databases.
Agents
------
LangChain offers a number of tools and functions that allow you to create SQL Agents, which provide a more flexible way of interacting with SQL databases. The main advantages of using SQL Agents are:
* It can answer questions based on the database's schema as well as on the database's content (like describing a specific table).
* It can recover from errors by running a generated query, catching the traceback and regenerating it correctly.
* It can answer questions that require multiple dependent queries.
* It will save tokens by only considering the schema from relevant tables.
To initialize the agent, we use the [`createOpenAIToolsAgent`](https://v02.api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) function together with the [`SqlToolkit`](https://v02.api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html), which contains tools to do the following (a sketch of the full wiring follows the list):
* Create and execute queries
* Check query syntax
* Retrieve table descriptions
* … and more
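The page stops short of a full agent example; the following is a hedged sketch of how these pieces can fit together. The hub prompt name and the question are illustrative assumptions, not from the original text:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { SqlToolkit } from "langchain/agents/toolkits/sql";
import { SqlDatabase } from "langchain/sql_db";
import { pull } from "langchain/hub";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const toolkit = new SqlToolkit(db, llm);
const tools = toolkit.getTools();

// Pull a generic tools-agent prompt from the LangChain Hub (assumed name).
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");

const agent = await createOpenAIToolsAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

console.log(
  await agentExecutor.invoke({
    input: "Which country's customers spent the most?",
  })
);
```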
How to stream responses from an LLM
===================================
All [`LLM`s](https://v02.api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html) implement the [Runnable interface](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html), which comes with **default** implementations of standard runnable methods (i.e. `invoke`, `batch`, `stream`, `streamEvents`).
The **default** streaming implementations provide an `AsyncGenerator` that yields a single value: the final output from the underlying LLM provider.
The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support.
See which [integrations support token-by-token streaming here](/v0.2/docs/integrations/llms/).
note
The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface.
Using `.stream()`
-----------------
The easiest way to stream is to use the `.stream()` method. This returns a readable stream that you can also iterate over:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai
# yarn
yarn add @langchain/openai
# pnpm
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  maxTokens: 25,
});

const stream = await model.stream("Tell me a joke.");

for await (const chunk of stream) {
  console.log(chunk);
}
/*
Q: What did the fish say when it hit the wall?
A: Dam!
*/
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
For models that do not support streaming, the entire response will be returned as a single chunk.
Using a callback handler
------------------------
You can also use a [`CallbackHandler`](https://v02.api.js.langchain.com/classes/langchain_core_callbacks_base.BaseCallbackHandler.html) like so:
```typescript
import { OpenAI } from "@langchain/openai";

// To enable streaming, we pass in `streaming: true` to the LLM constructor.
// Additionally, we pass in a handler for the `handleLLMNewToken` event.
const model = new OpenAI({
  maxTokens: 25,
  streaming: true,
});

const response = await model.invoke("Tell me a joke.", {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ],
});
console.log(response);
/*
{ token: '\n' }
{ token: '\n' }
{ token: 'Q' }
{ token: ':' }
{ token: ' Why' }
{ token: ' did' }
{ token: ' the' }
{ token: ' chicken' }
{ token: ' cross' }
{ token: ' the' }
{ token: ' playground' }
{ token: '?' }
{ token: '\n' }
{ token: 'A' }
{ token: ':' }
{ token: ' To' }
{ token: ' get' }
{ token: ' to' }
{ token: ' the' }
{ token: ' other' }
{ token: ' slide' }
{ token: '.' }

Q: Why did the chicken cross the playground?
A: To get to the other slide.
*/
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
We still have access to the final `LLMResult` if we use `generate`. However, `tokenUsage` may not currently be supported for all model providers when streaming. A sketch follows below.
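For illustration (a minimal sketch, not from the original page), combining `generate` with a streaming callback might look like this:

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxTokens: 25, streaming: true });

// `generate` takes an array of prompts and returns the full LLMResult,
// while tokens still stream through the callback handler.
const result = await model.generate(["Tell me a joke."], {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

console.log(result.generations[0][0].text);
console.log(result.llmOutput?.tokenUsage); // may be undefined when streaming
```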
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
Installation
](/v0.2/docs/how_to/installation)[
Next
How to stream chat model responses
](/v0.2/docs/how_to/chat_streaming)
* [Using `.stream()`](#using-stream)
* [Using a callback handler](#using-a-callback-handler)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/example_selectors | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to use example selectors
============================
**Prerequisites:** This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Few-shot examples](/v0.2/docs/how_to/few_shot_examples)
If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.
The base interface is defined as follows:

```typescript
class BaseExampleSelector {
  addExample(example: Example): Promise<void | string>;
  selectExamples(input_variables: Example): Promise<Example[]>;
}
```
The only method it needs to define is a `selectExamples` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.
LangChain has a few different types of example selectors. For an overview of all these types, see the table below.
In this guide, we will walk through creating a custom example selector.
Examples
--------
In order to use an example selector, we need to create a list of examples. These should generally be example inputs and outputs. For the purposes of this demo, let's imagine we are selecting examples of how to translate English to Italian.
```typescript
const examples = [
  { input: "hi", output: "ciao" },
  { input: "bye", output: "arrivederci" },
  { input: "soccer", output: "calcio" },
];
```
Custom Example Selector
-----------------------
Let’s write an example selector that chooses what example to pick based on the length of the word.
```typescript
import { BaseExampleSelector } from "@langchain/core/example_selectors";
import { Example } from "@langchain/core/prompts";

class CustomExampleSelector extends BaseExampleSelector {
  private examples: Example[];

  constructor(examples: Example[]) {
    super();
    this.examples = examples;
  }

  async addExample(example: Example): Promise<void | string> {
    this.examples.push(example);
    return;
  }

  async selectExamples(inputVariables: Example): Promise<Example[]> {
    // This assumes knowledge that part of the input will be an 'input' key
    const newWord = inputVariables.input;
    const newWordLength = newWord.length;

    // Initialize variables to store the best match and its length difference
    let bestMatch: Example | null = null;
    let smallestDiff = Infinity;

    // Iterate through each example
    for (const example of this.examples) {
      // Calculate the length difference with the first word of the example
      const currentDiff = Math.abs(example.input.length - newWordLength);

      // Update the best match if the current one is closer in length
      if (currentDiff < smallestDiff) {
        smallestDiff = currentDiff;
        bestMatch = example;
      }
    }

    return bestMatch ? [bestMatch] : [];
  }
}
```
```typescript
const exampleSelector = new CustomExampleSelector(examples);

await exampleSelector.selectExamples({ input: "okay" });
```

```text
[ { input: "bye", output: "arrivederci" } ]
```

```typescript
await exampleSelector.addExample({ input: "hand", output: "mano" });

await exampleSelector.selectExamples({ input: "okay" });
```

```text
[ { input: "hand", output: "mano" } ]
```
Use in a Prompt
---------------

We can now use this example selector in a prompt:
```typescript
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";

const examplePrompt = PromptTemplate.fromTemplate(
  "Input: {input} -> Output: {output}"
);

const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  suffix: "Input: {input} -> Output:",
  prefix: "Translate the following words from English to Italian:",
  inputVariables: ["input"],
});

console.log(await prompt.format({ input: "word" }));
```

```text
Translate the following words from English to Italian:

Input: hand -> Output: mano

Input: word -> Output:
```
Example Selector Types
----------------------

| Name | Description |
| --- | --- |
| Similarity | Uses semantic similarity between inputs and examples to decide which examples to choose. |
| Length | Selects examples based on how many can fit within a certain length. |
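For instance, a length-based selector can stand in for the custom selector above. Here is a minimal sketch, assuming the `LengthBasedExampleSelector` export from `@langchain/core/example_selectors` and its `fromExamples` helper; see the length-based guide linked below for the exact API:

```typescript
import { LengthBasedExampleSelector } from "@langchain/core/example_selectors";
import { PromptTemplate } from "@langchain/core/prompts";

// Assumption: `fromExamples` builds a selector that keeps adding examples
// until the formatted examples would exceed `maxLength`.
const lengthSelector = await LengthBasedExampleSelector.fromExamples(examples, {
  examplePrompt: PromptTemplate.fromTemplate("Input: {input} -> Output: {output}"),
  maxLength: 25,
});
```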
Next steps
----------

You've now learned a bit about using example selectors to provide few-shot examples to LLMs.

Next, check out the guides on other techniques for selecting examples:
* [How to select examples by length](/v0.2/docs/how_to/example_selectors_length_based)
* [How to select examples by similarity](/v0.2/docs/how_to/example_selectors_similarity)
https://js.langchain.com/v0.2/docs/how_to/installation
Installation
============
Supported Environments
----------------------
LangChain is written in TypeScript and can be used in:
* Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
* Cloudflare Workers
* Vercel / Next.js (Browser, Serverless and Edge functions)
* Supabase Edge Functions
* Browser
* Deno
* Bun
However, note that individual integrations may not be supported in all environments.
Installation
------------
To install LangChain, run the following command:
```bash
npm install langchain
# or
yarn add langchain
# or
pnpm add langchain
```
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc. By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.
### @langchain/community
The [@langchain/community](https://www.npmjs.com/package/@langchain/community) package contains a range of third-party integrations. Install with:
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
There are also more granular packages containing LangChain integrations for individual providers.
### @langchain/core
The [@langchain/core](https://www.npmjs.com/package/@langchain/core) package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed along with `langchain`, but can also be used separately. Install with:
```bash
npm install @langchain/core
# or
yarn add @langchain/core
# or
pnpm add @langchain/core
```
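Once installed, the LangChain Expression Language lets you compose runnables from `@langchain/core` alone with `.pipe()`. A minimal sketch (the function names are illustrative):

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Two trivial runnables composed with `.pipe()` (LCEL).
const shout = RunnableLambda.from((text: string) => text.toUpperCase());
const exclaim = RunnableLambda.from((text: string) => `${text}!`);

const chain = shout.pipe(exclaim);

console.log(await chain.invoke("hello")); // "HELLO!"
```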
### LangGraph
`langgraph` is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. Install with:
```bash
npm install @langchain/langgraph
# or
yarn add @langchain/langgraph
# or
pnpm add @langchain/langgraph
```
### LangSmith SDK
The LangSmith SDK is automatically installed by LangChain. If you're not using it with LangChain, install with:
```bash
npm install langsmith
# or
yarn add langsmith
# or
pnpm add langsmith
```
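As a standalone SDK, one common entry point is the `traceable` wrapper. A minimal sketch, assuming the usual LangSmith environment variables (`LANGCHAIN_API_KEY`, `LANGCHAIN_TRACING_V2=true`) are set; the function and run name here are illustrative:

```typescript
import { traceable } from "langsmith/traceable";

// Wrap any function; each call is logged as a run in LangSmith.
const formatGreeting = traceable(
  async (name: string) => `Hello, ${name}!`,
  { name: "format-greeting" }
);

console.log(await formatGreeting("world"));
```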
> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
Installing integration packages
-------------------------------
LangChain supports packages that contain module integrations with individual third-party providers. They can be as specific as [`@langchain/anthropic`](/v0.2/docs/integrations/platforms/anthropic/), which contains integrations just for Anthropic models, or as broad as [`@langchain/community`](https://www.npmjs.com/package/@langchain/community), which contains a broader variety of community-contributed integrations.
These packages, as well as the main LangChain package, all depend on [`@langchain/core`](https://www.npmjs.com/package/@langchain/core), which contains the base abstractions that these integration packages extend.
To ensure that all integrations and their types interact with each other properly, it is important that they all use the same version of `@langchain/core`. The best way to guarantee this is to add a `"resolutions"` or `"overrides"` field like the following in your project's `package.json`. The name will depend on your package manager:
> **Tip:** The `resolutions` or `pnpm.overrides` fields for `yarn` or `pnpm` must be set in the root `package.json` file.
If you are using `yarn`:
yarn `package.json`:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "private": true,
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@langchain/anthropic": "^0.0.2",
    "langchain": "0.0.207"
  },
  "resolutions": {
    "@langchain/core": "0.1.5"
  }
}
```
Or for `npm`:
npm `package.json`:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "private": true,
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@langchain/anthropic": "^0.0.2",
    "langchain": "0.0.207"
  },
  "overrides": {
    "@langchain/core": "0.1.5"
  }
}
```
Or for `pnpm`:
pnpm `package.json`:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "private": true,
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@langchain/anthropic": "^0.0.2",
    "langchain": "0.0.207"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "0.1.5"
    }
  }
}
```
Loading the library
-------------------
### TypeScript
LangChain is written in TypeScript and provides type definitions for all of its public APIs.
### ESM
LangChain provides an ESM build targeting Node.js environments. You can import it using the following syntax:
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { OpenAI } from "@langchain/openai";
```
If you are using TypeScript in an ESM project we suggest updating your `tsconfig.json` to include the following:
`tsconfig.json`:

```json
{
  "compilerOptions": {
    ...
    "target": "ES2020", // or higher
    "module": "nodenext"
  }
}
```
### CommonJS
LangChain provides a CommonJS build targeting Node.js environments. You can import it using the following syntax:
```typescript
const { OpenAI } = require("@langchain/openai");
```
### Cloudflare Workers
LangChain can be used in Cloudflare Workers. You can import it using the following syntax:
```typescript
import { OpenAI } from "@langchain/openai";
```
### Vercel / Next.js
LangChain can be used in Vercel / Next.js. We support using LangChain in frontend components, in Serverless functions and in Edge functions. You can import it using the following syntax:
```typescript
import { OpenAI } from "@langchain/openai";
```
### Deno / Supabase Edge Functions
LangChain can be used in Deno / Supabase Edge Functions. You can import it using the following syntax:
```typescript
import { OpenAI } from "https://esm.sh/@langchain/openai";
```

or

```typescript
import { OpenAI } from "npm:@langchain/openai";
```
We recommend looking at our [Supabase Template](https://github.com/langchain-ai/langchain-template-supabase) for an example of how to use LangChain in Supabase Edge Functions.
### Browser
LangChain can be used in the browser. In our CI we test bundling LangChain with Webpack and Vite, but other bundlers should work too. You can import it using the following syntax:
```typescript
import { OpenAI } from "@langchain/openai";
```
Unsupported: Node.js 16
-----------------------
We do not support Node.js 16, but if you still want to run LangChain on Node.js 16, you will need to follow the instructions in this section. We do not guarantee that these instructions will continue to work in the future.
You will have to make `fetch` available globally, either:
* run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
* install `node-fetch` and follow the instructions [here](https://github.com/node-fetch/node-fetch#providing-global-access), as in the sketch below
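A minimal sketch of node-fetch's documented global-access pattern (the `as any` casts are illustrative, since node-fetch's types differ slightly from the DOM's):

```typescript
import fetch, { Headers, Request, Response } from "node-fetch";

// Expose node-fetch globally if no native fetch is present (Node.js 16).
if (!globalThis.fetch) {
  (globalThis as any).fetch = fetch;
  (globalThis as any).Headers = Headers;
  (globalThis as any).Request = Request;
  (globalThis as any).Response = Response;
}
```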
You'll also need to [polyfill `ReadableStream`](https://www.npmjs.com/package/web-streams-polyfill) by installing:
```bash
npm i web-streams-polyfill
# or
yarn add web-streams-polyfill
# or
pnpm add web-streams-polyfill
```
And then adding it to the global namespace in your main entrypoint:
import "web-streams-polyfill/es6";
Additionally, you'll have to polyfill `structuredClone`, e.g. by installing `core-js` and following the instructions [here](https://github.com/zloirock/core-js).
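A minimal sketch, assuming core-js v3.20+ (which ships a `structuredClone` polyfill under its `actual` entry points):

```typescript
// Assumption: core-js >= 3.20; add once in your main entrypoint.
import "core-js/actual/structured-clone";
```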
If you are running Node.js 18+, you do not need to do anything.
https://js.langchain.com/v0.2/docs/how_to/multiple_queries
How to generate multiple queries to retrieve data for
=====================================================
**Prerequisites:** This guide assumes familiarity with the following concepts:
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on “distance”. But retrieval may produce different results with subtle changes in query wording or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.
The [`MultiQueryRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can help overcome some of the limitations of the distance-based retrieval and get a richer set of results.
Get started
-----------
> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm i @langchain/anthropic @langchain/cohere
# or
yarn add @langchain/anthropic @langchain/cohere
# or
pnpm add @langchain/anthropic @langchain/cohere
```
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { CohereEmbeddings } from "@langchain/cohere";
import { MultiQueryRetriever } from "langchain/retrievers/multi_query";
import { ChatAnthropic } from "@langchain/anthropic";

const embeddings = new CohereEmbeddings();
const vectorstore = await MemoryVectorStore.fromTexts(
  [
    "Buildings are made out of brick",
    "Buildings are made out of wood",
    "Buildings are made out of stone",
    "Cars are made out of metal",
    "Cars are made out of plastic",
    "mitochondria is the powerhouse of the cell",
    "mitochondria is made of lipids",
  ],
  [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }],
  embeddings
);
const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});
const retriever = MultiQueryRetriever.fromLLM({
  llm: model,
  retriever: vectorstore.asRetriever(),
});
const query = "What are mitochondria made of?";
const retrievedDocs = await retriever.invoke(query);
/*
  Generated queries: What are the components of mitochondria?,What substances comprise the mitochondria organelle?,What is the molecular composition of mitochondria?
*/
console.log(retrievedDocs);
```
```text
[
  Document { pageContent: "mitochondria is made of lipids", metadata: {} },
  Document { pageContent: "mitochondria is the powerhouse of the cell", metadata: {} },
  Document { pageContent: "Buildings are made out of brick", metadata: { id: 1 } },
  Document { pageContent: "Buildings are made out of wood", metadata: { id: 2 } }
]
```
Customization
-------------
You can also supply a custom prompt to tune what types of questions are generated. You can also pass a custom output parser to parse and split the results of the LLM call into a list of queries.
```typescript
import { LLMChain } from "langchain/chains";
import { pull } from "langchain/hub";
import { BaseOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

type LineList = {
  lines: string[];
};

class LineListOutputParser extends BaseOutputParser<LineList> {
  static lc_name() {
    return "LineListOutputParser";
  }

  lc_namespace = ["langchain", "retrievers", "multiquery"];

  async parse(text: string): Promise<LineList> {
    const startKeyIndex = text.indexOf("<questions>");
    const endKeyIndex = text.indexOf("</questions>");
    const questionsStartIndex =
      startKeyIndex === -1 ? 0 : startKeyIndex + "<questions>".length;
    const questionsEndIndex = endKeyIndex === -1 ? text.length : endKeyIndex;
    const lines = text
      .slice(questionsStartIndex, questionsEndIndex)
      .trim()
      .split("\n")
      .filter((line) => line.trim() !== "");
    return { lines };
  }

  getFormatInstructions(): string {
    throw new Error("Not implemented.");
  }
}

// Default prompt is available at: https://smith.langchain.com/hub/jacob/multi-vector-retriever-german
const prompt: PromptTemplate = await pull(
  "jacob/multi-vector-retriever-german"
);

const vectorstore = await MemoryVectorStore.fromTexts(
  [
    "Gebäude werden aus Ziegelsteinen hergestellt",
    "Gebäude werden aus Holz hergestellt",
    "Gebäude werden aus Stein hergestellt",
    "Autos werden aus Metall hergestellt",
    "Autos werden aus Kunststoff hergestellt",
    "Mitochondrien sind die Energiekraftwerke der Zelle",
    "Mitochondrien bestehen aus Lipiden",
  ],
  [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }],
  embeddings
);
const model = new ChatAnthropic({});
const llmChain = new LLMChain({
  llm: model,
  prompt,
  outputParser: new LineListOutputParser(),
});
const retriever = new MultiQueryRetriever({
  retriever: vectorstore.asRetriever(),
  llmChain,
});
const query = "What are mitochondria made of?";
const retrievedDocs = await retriever.invoke(query);
/*
  Generated queries: Was besteht ein Mitochondrium?,Aus welchen Komponenten setzt sich ein Mitochondrium zusammen?,Welche Moleküle finden sich in einem Mitochondrium?
*/
console.log(retrievedDocs);
```
```text
[
  Document { pageContent: "Mitochondrien bestehen aus Lipiden", metadata: {} },
  Document { pageContent: "Mitochondrien sind die Energiekraftwerke der Zelle", metadata: {} },
  Document { pageContent: "Gebäude werden aus Stein hergestellt", metadata: { id: 3 } },
  Document { pageContent: "Autos werden aus Metall hergestellt", metadata: { id: 4 } },
  Document { pageContent: "Gebäude werden aus Holz hergestellt", metadata: { id: 2 } },
  Document { pageContent: "Gebäude werden aus Ziegelsteinen hergestellt", metadata: { id: 1 } }
]
```
Next steps
----------
You’ve now learned how to use the `MultiQueryRetriever` to query a vector store with automatically generated queries.
See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
https://js.langchain.com/v0.2/docs/how_to/output_parser_json
How to parse JSON output
========================
While some model providers support [built-in ways to return structured output](/v0.2/docs/how_to/structured_output), not all do. We can use an output parser to help users specify an arbitrary JSON schema via the prompt, query a model for outputs that conform to that schema, and finally parse that output as JSON.
note
Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed JSON.
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Output parsers](/v0.2/docs/concepts/#output-parsers)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Structured output](/v0.2/docs/how_to/structured_output)
* [Chaining runnables together](/v0.2/docs/how_to/sequence/)
The [`JsonOutputParser`](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) is one built-in option for prompting for and then parsing JSON output.
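Since the parser itself is model-agnostic, here is a minimal sketch of it in isolation before we wire it into a chain. The input string below is a made-up example of what a model might return; the parser extracts the JSON whether or not it is wrapped in a markdown code fence:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

// A minimal sketch: invoke the parser directly on a model-style response.
// The input here is a hypothetical model output, not a real completion.
const parser = new JsonOutputParser();

const parsed = await parser.invoke(
  '```json\n{ "setup": "Why did the JSON cross the road?", "punchline": "To be parsed!" }\n```'
);
console.log(parsed);
// { setup: "Why did the JSON cross the road?", punchline: "To be parsed!" }
```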
### Pick your chat model:

tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

#### OpenAI

Install dependencies:

```bash
npm i @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

Add environment variables:

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
```

#### Anthropic

Install dependencies:

```bash
npm i @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```

Add environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:

```bash
npm i @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

Add environment variables:

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

```bash
npm i @langchain/mistralai
# or
yarn add @langchain/mistralai
# or
pnpm add @langchain/mistralai
```

Add environment variables:

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### Groq

Install dependencies:

```bash
npm i @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```

Add environment variables:

```bash
GROQ_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});
```

#### VertexAI

Install dependencies:

```bash
npm i @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```

Add environment variables:

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

Instantiate the model:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
```
```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

// Define your desired data structure.
interface Joke {
  setup: string;
  punchline: string;
}

// And a query intended to prompt a language model to populate the data structure.
const jokeQuery = "Tell me a joke.";
const formatInstructions =
  "Respond with a valid JSON object, containing two fields: 'setup' and 'punchline'.";

// Set up a parser + inject instructions into the prompt template.
const parser = new JsonOutputParser<Joke>();

const prompt = new PromptTemplate({
  template: "Answer the user query.\n{format_instructions}\n{query}\n",
  inputVariables: ["query"],
  partialVariables: { format_instructions: formatInstructions },
});

const chain = prompt.pipe(model).pipe(parser);

await chain.invoke({ query: jokeQuery });
```

```text
{
  setup: "Why couldn't the bicycle stand up by itself?",
  punchline: "Because it was two tired!"
}
```
Streaming
---------
The `JsonOutputParser` also supports streaming partial chunks. This is useful when the model returns partial JSON output in multiple chunks. The parser will keep track of the partial chunks and return the final JSON output when the model finishes generating the output.
```typescript
for await (const s of await chain.stream({ query: jokeQuery })) {
  console.log(s);
}
```

```text
{}
{ setup: "" }
{ setup: "Why" }
{ setup: "Why couldn" }
{ setup: "Why couldn't" }
{ setup: "Why couldn't the" }
{ setup: "Why couldn't the bicycle" }
{ setup: "Why couldn't the bicycle stand" }
{ setup: "Why couldn't the bicycle stand up" }
{ setup: "Why couldn't the bicycle stand up by" }
{ setup: "Why couldn't the bicycle stand up by itself" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "It" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "It was" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "It was two" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "It was two tired" }
{ setup: "Why couldn't the bicycle stand up by itself?", punchline: "It was two tired." }
```
Next steps
----------
You’ve now learned one way to prompt a model to return structured JSON. Next, check out the [broader guide on obtaining structured output](/v0.2/docs/how_to/structured_output) for other techniques.
* * *
How to retry when output parsing errors occur
=============================================
This output parser wraps another output parser; in the event that the wrapped parser fails, it calls out to an LLM to fix the errors.
Rather than simply throwing the parsing error, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix the mistake.
For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { OutputFixingParser } from "langchain/output_parsers";
import { StructuredOutputParser } from "@langchain/core/output_parsers";

export const run = async () => {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("answer to the user's question"),
      sources: z
        .array(z.string())
        .describe("sources used to answer the question, should be websites."),
    })
  );

  /** This is a bad output because sources is a string, not a list */
  const badOutput = `\`\`\`json
  {
    "answer": "foo",
    "sources": "foo.com"
  }
  \`\`\``;

  try {
    await parser.parse(badOutput);
  } catch (e) {
    console.log("Failed to parse bad output: ", e);
    /*
      Failed to parse bad output: OutputParserException [Error]: Failed to parse.
      Error: [{ "code": "invalid_type", "expected": "array", "received": "string",
        "path": ["sources"], "message": "Expected array, received string" }]
    */
  }

  const fixParser = OutputFixingParser.fromLLM(
    new ChatOpenAI({ temperature: 0 }),
    parser
  );
  const output = await fixParser.parse(badOutput);
  console.log("Fixed output: ", output);
  // Fixed output:  { answer: 'foo', sources: [ 'foo.com' ] }
};
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OutputFixingParser](https://v02.api.js.langchain.com/classes/langchain_output_parsers.OutputFixingParser.html) from `langchain/output_parsers`
* [StructuredOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) from `@langchain/core/output_parsers`
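The fixing parser drops into a chain wherever the original parser would go. A minimal, hypothetical sketch reusing `parser` and `fixParser` from the example above:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

// Hypothetical wiring: `parser` and `fixParser` are the ones defined above.
// The chain looks identical to one ending in `parser` directly; only the
// final step is swapped for the self-healing variant.
const prompt = new PromptTemplate({
  template: "Answer the user's question.\n{format_instructions}\n{question}\n",
  inputVariables: ["question"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

const chain = prompt.pipe(new ChatOpenAI({ temperature: 0 })).pipe(fixParser);

const result = await chain.invoke({
  question: "What is LangChain? Cite your sources.",
});
// e.g. { answer: "...", sources: ["https://js.langchain.com"] }
```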
* * *
How to parse XML output
=======================
The `XMLOutputParser` takes language model output which contains XML and parses it into a JSON object.
The output parser also supports streaming outputs.
Currently, the XML parser does not support self-closing tags or attributes on tags.
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/core
# or
yarn add @langchain/core
# or
pnpm add @langchain/core
```
```typescript
import { XMLOutputParser } from "@langchain/core/output_parsers";

const XML_EXAMPLE = `<?xml version="1.0" encoding="UTF-8"?>
<userProfile>
  <userID>12345</userID>
  <name>John Doe</name>
  <email>john.doe@example.com</email>
  <roles>
    <role>Admin</role>
    <role>User</role>
  </roles>
  <preferences>
    <theme>Dark</theme>
    <notifications>
      <email>true</email>
      <sms>false</sms>
    </notifications>
  </preferences>
</userProfile>`;

const parser = new XMLOutputParser();

const result = await parser.invoke(XML_EXAMPLE);

console.log(JSON.stringify(result, null, 2));
/*
{
  "userProfile": [
    { "userID": "12345" },
    { "name": "John Doe" },
    { "email": "john.doe@example.com" },
    {
      "roles": [{ "role": "Admin" }, { "role": "User" }]
    },
    {
      "preferences": [
        { "theme": "Dark" },
        {
          "notifications": [{ "email": "true" }, { "sms": "false" }]
        }
      ]
    }
  ]
}
*/
```
#### API Reference:
* [XMLOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) from `@langchain/core/output_parsers`
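To use the parser against a live model, you can prompt for XML explicitly. Below is a sketch, not a canonical recipe: it assumes a chat model such as `ChatAnthropic` is configured, and the format instructions are hand-written rather than generated by the parser:

```typescript
import { XMLOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

// A sketch under stated assumptions: we hand-write the format
// instructions and rely on the model to emit well-formed XML.
const parser = new XMLOutputParser();

const prompt = PromptTemplate.fromTemplate(
  `Answer the user's question. Respond only with XML, using an <answer> root tag containing <point> tags.

{question}`
);

const model = new ChatAnthropic({ temperature: 0 });
const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({
  question: "Name two moons of Jupiter.",
});
// e.g. { answer: [{ point: "Io" }, { point: "Europa" }] }
```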
Streaming
---------
```typescript
import { XMLOutputParser } from "@langchain/core/output_parsers";
import { FakeStreamingLLM } from "@langchain/core/utils/testing";

const XML_EXAMPLE = `<?xml version="1.0" encoding="UTF-8"?>
<userProfile>
  <userID>12345</userID>
  <roles>
    <role>Admin</role>
    <role>User</role>
  </roles>
</userProfile>`;

const parser = new XMLOutputParser();

// Define your LLM. In this example we'll use a demo streaming LLM.
const streamingLLM = new FakeStreamingLLM({
  responses: [XML_EXAMPLE],
}).pipe(parser); // Pipe the parser to the LLM

const stream = await streamingLLM.stream(XML_EXAMPLE);
for await (const chunk of stream) {
  console.log(JSON.stringify(chunk, null, 2));
}
/*
{}
{ "userProfile": "" }
{ "userProfile": [{ "userID": "" }] }
{ "userProfile": [{ "userID": "123" }] }
{ "userProfile": [{ "userID": "12345" }, { "roles": "" }] }
{ "userProfile": [{ "userID": "12345" }, { "roles": [{ "role": "A" }] }] }
{ "userProfile": [{ "userID": "12345" }, { "roles": [{ "role": "Admin" }, { "role": "U" }] }] }
{ "userProfile": [{ "userID": "12345" }, { "roles": [{ "role": "Admin" }, { "role": "User" }] }] }
*/
```
#### API Reference:
* [XMLOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) from `@langchain/core/output_parsers`
* [FakeStreamingLLM](https://v02.api.js.langchain.com/classes/langchain_core_utils_testing.FakeStreamingLLM.html) from `@langchain/core/utils/testing`
* * *
How to invoke runnables in parallel
===================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
The [`RunnableParallel`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableParallel.html) (also known as a `RunnableMap`) primitive is an object whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the initial input to the `RunnableParallel`. The final return value is an object with the results of each value under its appropriate key.
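As a minimal sketch of the primitive itself, with made-up keys and plain functions (which are coerced to runnables):

```typescript
import { RunnableMap } from "@langchain/core/runnables";

// A minimal sketch: plain functions are coerced to runnables,
// and every value receives the same input, run in parallel.
const mapper = RunnableMap.from({
  doubled: (x: number) => x * 2,
  squared: (x: number) => x * x,
});

console.log(await mapper.invoke(3)); // { doubled: 6, squared: 9 }
```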
Formatting with `RunnableParallels`
-----------------------------------
`RunnableParallels` are useful for parallelizing operations, but can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence. You can use them to split or fork the chain so that multiple components can process the input in parallel. Later, other components can join or merge the results to synthesize a final response. This type of chain creates a computation graph that looks like the following:
```text
        Input
        /   \
       /     \
  Branch1  Branch2
       \     /
        \   /
       Combine
```
Below, the input to each chain in the `RunnableParallel` is expected to be an object with a key for `"topic"`. We can satisfy that requirement by invoking our chain with an object matching that structure.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/anthropic @langchain/cohere
# or
yarn add @langchain/anthropic @langchain/cohere
# or
pnpm add @langchain/anthropic @langchain/cohere
```
```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableMap } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({});

const jokeChain = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
).pipe(model);
const poemChain = PromptTemplate.fromTemplate(
  "write a 2-line poem about {topic}"
).pipe(model);

const mapChain = RunnableMap.from({
  joke: jokeChain,
  poem: poemChain,
});

const result = await mapChain.invoke({ topic: "bear" });
console.log(result);
/*
  {
    joke: AIMessage {
      content: " Here's a silly joke about a bear:\n" +
        "\n" +
        "What do you call a bear with no teeth?\n" +
        "A gummy bear!",
      additional_kwargs: {}
    },
    poem: AIMessage {
      content: " Here is a 2-line poem about a bear:\n" +
        "\n" +
        "Furry and wild, the bear roams free \n" +
        "Foraging the forest, strong as can be",
      additional_kwargs: {}
    }
  }
*/
```
#### API Reference:
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableMap](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableMap.html) from `@langchain/core/runnables`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Manipulating outputs/inputs
---------------------------
Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.
Note below that the object within the `RunnableSequence.from()` call is automatically coerced into a runnable map. All keys of the object must have values that are runnables, or that can themselves be coerced to runnables (functions become `RunnableLambda`s and objects become `RunnableMap`s). This coercion will also occur when composing chains via the `.pipe()` method.
```typescript
import { CohereEmbeddings } from "@langchain/cohere";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { Document } from "@langchain/core/documents";
import { ChatAnthropic } from "@langchain/anthropic";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const model = new ChatAnthropic();

const vectorstore = await MemoryVectorStore.fromDocuments(
  [{ pageContent: "mitochondria is the powerhouse of the cell", metadata: {} }],
  new CohereEmbeddings()
);
const retriever = vectorstore.asRetriever();

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;

const prompt = PromptTemplate.fromTemplate(template);

const formatDocs = (docs: Document[]) => docs.map((doc) => doc.pageContent);

const retrievalChain = RunnableSequence.from([
  { context: retriever.pipe(formatDocs), question: new RunnablePassthrough() },
  prompt,
  model,
  new StringOutputParser(),
]);

const result = await retrievalChain.invoke(
  "what is the powerhouse of the cell?"
);
console.log(result);
/*
  Based on the given context, the powerhouse of the cell is mitochondria.
*/
```
#### API Reference:
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
Here, the input to the prompt is expected to be a map with the keys `"context"` and `"question"`. The user input is just the question, so we need to fetch the context using our retriever and pass the user input through under the `"question"` key.
Next steps
----------
You now know some ways to format and parallelize chain steps with `RunnableParallel`.
Next, you might be interested in [using custom logic](/v0.2/docs/how_to/functions/) in your chains.
* * *
https://js.langchain.com/v0.2/docs/how_to/parent_document_retriever | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to retrieve the whole document for a chunk
==============================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Text splitters](/v0.2/docs/concepts/#text-splitters)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
When splitting documents for retrieval, there are often conflicting desires:
1. You may want to have small documents, so that their embeddings can most accurately reflect their meaning. If documents are too long, then the embeddings can lose meaning.
2. You want to have long enough documents that the context of each chunk is retained.
The [`ParentDocumentRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) strikes that balance by splitting and storing small chunks of data. During retrieval, it first fetches the small chunks but then looks up the parent ids for those chunks and returns those larger documents.
Note that "parent document" refers to the document that a small chunk originated from. This can either be the whole raw document OR a larger chunk.
This is a more specific form of [generating multiple embeddings per document](/v0.2/docs/how_to/multi_vector).
Usage[](#usage "Direct link to Usage")
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { InMemoryStore } from "@langchain/core/stores";
import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { TextLoader } from "langchain/document_loaders/fs/text";

const vectorstore = new MemoryVectorStore(new OpenAIEmbeddings());
const docstore = new InMemoryStore();
const retriever = new ParentDocumentRetriever({
  vectorstore,
  docstore,
  // Optional, not required if you're already passing in split documents
  parentSplitter: new RecursiveCharacterTextSplitter({
    chunkOverlap: 0,
    chunkSize: 500,
  }),
  childSplitter: new RecursiveCharacterTextSplitter({
    chunkOverlap: 0,
    chunkSize: 50,
  }),
  // Optional `k` parameter to search for more child documents in VectorStore.
  // Note that this does not exactly correspond to the number of final (parent) documents
  // retrieved, as multiple child documents can point to the same parent.
  childK: 20,
  // Optional `k` parameter to limit number of final, parent documents returned from this
  // retriever and sent to LLM. This is an upper-bound, and the final count may be lower than this.
  parentK: 5,
});

const textLoader = new TextLoader("../examples/state_of_the_union.txt");
const parentDocuments = await textLoader.load();

// We must add the parent documents via the retriever's addDocuments method
await retriever.addDocuments(parentDocuments);

const retrievedDocs = await retriever.invoke("justice breyer");

// Retrieved chunks are the larger parent chunks
console.log(retrievedDocs);
/*
  [
    Document {
      pageContent: 'Tonight, I call on the Senate to pass — pass the Freedom to Vote Act. Pass the John Lewis Act — Voting Rights Act. And while you’re at it, pass the DISCLOSE Act so Americans know who is funding our elections.\n' +
        '\n' +
        'Look, tonight, I’d — I’d like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army veteran, Constitutional scholar, retiring Justice of the United States Supreme Court.',
      metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] }
    },
    Document {
      pageContent: 'As I did four days ago, I’ve nominated a Circuit Court of Appeals — Ketanji Brown Jackson. One of our nation’s top legal minds who will continue in just Brey- — Justice Breyer’s legacy of excellence. A former top litigator in private practice, a former federal public defender from a family of public-school educators and police officers — she’s a consensus builder.',
      metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] }
    },
    Document {
      pageContent: 'Justice Breyer, thank you for your service. Thank you, thank you, thank you. I mean it. Get up. Stand — let me see you. Thank you.\n' +
        '\n' +
        'And we all know — no matter what your ideology, we all know one of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.',
      metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] }
    }
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
With Score Threshold[](#with-score-threshold "Direct link to With Score Threshold")
------------------------------------------------------------------------------------
By setting the options in `scoreThresholdOptions` we can force the `ParentDocumentRetriever` to use the `ScoreThresholdRetriever` under the hood. This sets the vector store inside `ScoreThresholdRetriever` as the one we passed when initializing `ParentDocumentRetriever`, while also allowing us to set a score threshold for the retriever.
This can be helpful when you're not sure how many documents you want (or if you are sure, just set the `maxK` option), but you want to make sure that the documents you do get are within a certain relevancy threshold.
Note: if a retriever is passed, `ParentDocumentRetriever` will default to using it for retrieving small chunks, as well as for adding documents via the `addDocuments` method.
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { InMemoryStore } from "@langchain/core/stores";
import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { ScoreThresholdRetriever } from "langchain/retrievers/score_threshold";

const vectorstore = new MemoryVectorStore(new OpenAIEmbeddings());
const docstore = new InMemoryStore();
const childDocumentRetriever = ScoreThresholdRetriever.fromVectorStore(
  vectorstore,
  {
    minSimilarityScore: 0.01, // Essentially no threshold
    maxK: 1, // Only return the top result
  }
);
const retriever = new ParentDocumentRetriever({
  vectorstore,
  docstore,
  childDocumentRetriever,
  // Optional, not required if you're already passing in split documents
  parentSplitter: new RecursiveCharacterTextSplitter({
    chunkOverlap: 0,
    chunkSize: 500,
  }),
  childSplitter: new RecursiveCharacterTextSplitter({
    chunkOverlap: 0,
    chunkSize: 50,
  }),
});

const textLoader = new TextLoader("../examples/state_of_the_union.txt");
const parentDocuments = await textLoader.load();

// We must add the parent documents via the retriever's addDocuments method
await retriever.addDocuments(parentDocuments);

const retrievedDocs = await retriever.invoke("justice breyer");

// Retrieved chunk is the larger parent chunk
console.log(retrievedDocs);
/*
  [
    Document {
      pageContent: 'Tonight, I call on the Senate to pass — pass the Freedom to Vote Act. Pass the John Lewis Act — Voting Rights Act. And while you’re at it, pass the DISCLOSE Act so Americans know who is funding our elections.\n' +
        '\n' +
        'Look, tonight, I’d — I’d like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army veteran, Constitutional scholar, retiring Justice of the United States Supreme Court.',
      metadata: { source: '../examples/state_of_the_union.txt', loc: [Object] }
    },
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [ScoreThresholdRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_score_threshold.ScoreThresholdRetriever.html) from `langchain/retrievers/score_threshold`
With Contextual chunk headers[](#with-contextual-chunk-headers "Direct link to With Contextual chunk headers")
---------------------------------------------------------------------------------------------------------------
Consider a scenario where you want to store a collection of documents in a vector store and perform Q&A tasks on them. Simply splitting documents with overlapping text may not provide sufficient context for LLMs to determine if multiple chunks are referencing the same information, or how to resolve information from contradictory sources.
Tagging each document with metadata is a solution if you know what to filter against, but you may not know ahead of time exactly what kind of queries your vector store will be expected to handle. Including additional contextual information directly in each chunk in the form of headers can help deal with arbitrary queries.
This is particularly important if you have several fine-grained child chunks that need to be correctly retrieved from the vector store.
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { InMemoryStore } from "@langchain/core/stores";
import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1500,
  chunkOverlap: 0,
});

const jimDocs = await splitter.createDocuments([`My favorite color is blue.`]);
const jimChunkHeaderOptions = {
  chunkHeader: "DOC NAME: Jim Interview\n---\n",
  appendChunkOverlapHeader: true,
};

const pamDocs = await splitter.createDocuments([`My favorite color is red.`]);
const pamChunkHeaderOptions = {
  chunkHeader: "DOC NAME: Pam Interview\n---\n",
  appendChunkOverlapHeader: true,
};

const vectorstore = await HNSWLib.fromDocuments([], new OpenAIEmbeddings());
const docstore = new InMemoryStore();

const retriever = new ParentDocumentRetriever({
  vectorstore,
  docstore,
  // Very small chunks for demo purposes.
  // Use a bigger chunk size for serious use-cases.
  childSplitter: new RecursiveCharacterTextSplitter({
    chunkSize: 10,
    chunkOverlap: 0,
  }),
  childK: 50,
  parentK: 5,
});

// We pass additional option `childDocChunkHeaderOptions`
// that will add the chunk header to child documents
await retriever.addDocuments(jimDocs, {
  childDocChunkHeaderOptions: jimChunkHeaderOptions,
});
await retriever.addDocuments(pamDocs, {
  childDocChunkHeaderOptions: pamChunkHeaderOptions,
});

// This will search child documents in vector store with the help of chunk header,
// returning the unmodified parent documents
const retrievedDocs = await retriever.invoke("What is Pam's favorite color?");

// Pam's favorite color is returned first!
console.log(JSON.stringify(retrievedDocs, null, 2));
/*
  [
    {
      "pageContent": "My favorite color is red.",
      "metadata": { "loc": { "lines": { "from": 1, "to": 1 } } }
    },
    {
      "pageContent": "My favorite color is blue.",
      "metadata": { "loc": { "lines": { "from": 1, "to": 1 } } }
    }
  ]
*/

const rawDocs = await vectorstore.similaritySearch(
  "What is Pam's favorite color?"
);

// Raw docs in vectorstore are short but have chunk headers
console.log(JSON.stringify(rawDocs, null, 2));
/*
  [
    {
      "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) color is",
      "metadata": {
        "loc": { "lines": { "from": 1, "to": 1 } },
        "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa"
      }
    },
    {
      "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) favorite",
      "metadata": {
        "loc": { "lines": { "from": 1, "to": 1 } },
        "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa"
      }
    },
    {
      "pageContent": "DOC NAME: Pam Interview\n---\n(cont'd) red.",
      "metadata": {
        "loc": { "lines": { "from": 1, "to": 1 } },
        "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa"
      }
    },
    {
      "pageContent": "DOC NAME: Pam Interview\n---\nMy",
      "metadata": {
        "loc": { "lines": { "from": 1, "to": 1 } },
        "doc_id": "affdcbeb-6bfb-42e9-afe5-80f4f2e9f6aa"
      }
    }
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
With Reranking[](#with-reranking "Direct link to With Reranking")
------------------------------------------------------------------
When many documents from the vector store are passed to the LLM, the final answer sometimes incorporates information from irrelevant chunks, making it less precise and sometimes incorrect. Passing along multiple irrelevant documents also makes the call more expensive. So there are two reasons to use reranking: precision and cost.
import { OpenAIEmbeddings } from "@langchain/openai";
import { CohereRerank } from "@langchain/cohere";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { InMemoryStore } from "@langchain/core/stores";
import {
  ParentDocumentRetriever,
  type SubDocs,
} from "langchain/retrievers/parent_document";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// init Cohere Rerank. Remember to add COHERE_API_KEY to your .env
const reranker = new CohereRerank({
  topN: 50,
  model: "rerank-multilingual-v2.0",
});

export function documentCompressorFiltering({
  relevanceScore,
}: { relevanceScore?: number } = {}) {
  return (docs: SubDocs) => {
    let outputDocs = docs;
    if (relevanceScore) {
      const docsRelevanceScoreValues = docs.map(
        (doc) => doc?.metadata?.relevanceScore
      );
      outputDocs = docs.filter(
        (_doc, index) =>
          (docsRelevanceScoreValues?.[index] || 1) >= relevanceScore
      );
    }
    return outputDocs;
  };
}

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const jimDocs = await splitter.createDocuments([`Jim favorite color is blue.`]);
const pamDocs = await splitter.createDocuments([`Pam favorite color is red.`]);

const vectorstore = await HNSWLib.fromDocuments([], new OpenAIEmbeddings());
const docstore = new InMemoryStore();

const retriever = new ParentDocumentRetriever({
  vectorstore,
  docstore,
  // Very small chunks for demo purposes.
  // Use a bigger chunk size for serious use-cases.
  childSplitter: new RecursiveCharacterTextSplitter({
    chunkSize: 10,
    chunkOverlap: 0,
  }),
  childK: 50,
  parentK: 5,
  // We add Reranker
  documentCompressor: reranker,
  documentCompressorFilteringFn: documentCompressorFiltering({
    relevanceScore: 0.3,
  }),
});

const docs = jimDocs.concat(pamDocs);
await retriever.addDocuments(docs);

// This will search for documents in the vector store and return for the LLM
// an already reranked and sorted document with the appropriate minimum relevance score
const retrievedDocs = await retriever.getRelevantDocuments(
  "What is Pam's favorite color?"
);

// Pam's favorite color is returned first!
console.log(JSON.stringify(retrievedDocs, null, 2));
/*
  [
    {
      "pageContent": "Pam favorite color is red.",
      "metadata": {
        "relevanceScore": 0.9,
        "loc": { "lines": { "from": 1, "to": 1 } }
      }
    }
  ]
*/
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [CohereRerank](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereRerank.html) from `@langchain/cohere`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [InMemoryStore](https://v02.api.js.langchain.com/classes/langchain_core_stores.InMemoryStore.html) from `@langchain/core/stores`
* [ParentDocumentRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html) from `langchain/retrievers/parent_document`
* [SubDocs](https://v02.api.js.langchain.com/types/langchain_retrievers_parent_document.SubDocs.html) from `langchain/retrievers/parent_document`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to use the `ParentDocumentRetriever`.
Next, check out the more general form of [generating multiple embeddings per document](/v0.2/docs/how_to/multi_vector), the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
https://js.langchain.com/v0.2/docs/how_to/prompts_partial
How to partially format prompt templates
========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
Like partially binding arguments to a function, it can make sense to "partial" a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template that expects only the remaining subset of values.
LangChain supports this in two ways:
1. Partial formatting with string values.
2. Partial formatting with functions that return string values.
In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.
Partial with strings[](#partial-with-strings "Direct link to Partial with strings")
------------------------------------------------------------------------------------
One common use case for wanting to partial a prompt template is if you get access to some of the variables in a prompt before others. For example, suppose you have a prompt template that requires two variables, `foo` and `baz`. If you get the `foo` value early on in your chain, but the `baz` value later, it can be inconvenient to pass both variables all the way through the chain. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["foo", "bar"],
});

const partialPrompt = await prompt.partial({
  foo: "foo",
});

const formattedPrompt = await partialPrompt.format({
  bar: "baz",
});

console.log(formattedPrompt);
// foobaz
You can also just initialize the prompt with the partialed variables.
const prompt = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["bar"],
  partialVariables: {
    foo: "foo",
  },
});

const formattedPrompt = await prompt.format({
  bar: "baz",
});

console.log(formattedPrompt);
// foobaz
Partial With Functions[](#partial-with-functions "Direct link to Partial With Functions")
------------------------------------------------------------------------------------------
You can also partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can't hard code it in the prompt, and passing it along with the other input variables can be tedious. In this case, it's very handy to be able to partial the prompt with a function that always returns the current date.
const getCurrentDate = () => {
  return new Date().toISOString();
};

const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective", "date"],
});

const partialPrompt = await prompt.partial({
  date: getCurrentDate,
});

const formattedPrompt = await partialPrompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);
// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
You can also just initialize the prompt with the partialed variables:
const prompt = new PromptTemplate({
  template: "Tell me a {adjective} joke about the day {date}",
  inputVariables: ["adjective"],
  partialVariables: {
    date: getCurrentDate,
  },
});

const formattedPrompt = await prompt.format({
  adjective: "funny",
});

console.log(formattedPrompt);
// Tell me a funny joke about the day 2023-07-13T00:54:59.287Z
Next steps[](#next-steps "Direct link to Next steps")
------------------------------------------------------
You've now learned how to partially apply variables to your prompt templates.
Next, check out the other how-to guides on prompt templates in this section, like [adding few-shot examples to your prompt templates](/v0.2/docs/how_to/few_shot_examples_chat).
https://js.langchain.com/v0.2/docs/how_to/qa_chat_history_how_to
How to add chat history to a question-answering chain
=====================================================
Prerequisites
This guide assumes familiarity with the following:
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/)
In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of “memory” of past questions and answers, and some logic for incorporating those into its current thinking.
In this guide we focus on **adding logic for incorporating historical messages, and NOT on chat history management.** Chat history management is [covered here](/v0.2/docs/how_to/message_history).
We’ll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng. We’ll need to update two things about our existing app:
1. **Prompt**: Update our prompt to support historical messages as an input.
2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This is needed in case the latest question references some context from past messages. For example, if a user asks a follow-up question like “Can you elaborate on the second point?”, this cannot be understood without the context of the previous message. Therefore we can’t effectively perform retrieval with a question like this.
Setup[](#setup "Direct link to Setup")
---------------------------------------
### Dependencies[](#dependencies "Direct link to Dependencies")
We’ll use an OpenAI chat model and embeddings and a Memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers).
We’ll use the following packages:
npm install --save langchain @langchain/openai cheerio
We need to set the environment variable `OPENAI_API_KEY`:
export OPENAI_API_KEY=YOUR_KEY
### LangSmith[](#langsmith "Direct link to LangSmith")
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](/v0.2/docs/langsmith/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
### Initial setup[](#initial-setup "Direct link to Initial setup")
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";import { createStuffDocumentsChain } from "langchain/chains/combine_documents";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();// Tip - you can edit this!const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = await createStuffDocumentsChain({ llm, prompt, outputParser: new StringOutputParser(),});
Let’s see what this prompt actually looks like:
console.log(prompt.promptMessages.map((msg) => msg.prompt.template).join("\n"));
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
await ragChain.invoke({
  context: await retriever.invoke("What is Task Decomposition?"),
  question: "What is Task Decomposition?",
});
"Task Decomposition involves breaking down complex tasks into smaller and simpler steps to make them "... 243 more characters
Contextualizing the question[](#contextualizing-the-question "Direct link to Contextualizing the question")
------------------------------------------------------------------------------------------------------------
First we’ll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the historical information.
We’ll use a prompt that includes a `MessagesPlaceholder` variable under the name “chat_history”. This allows us to pass in a list of Messages to the prompt using the “chat_history” input key, and these messages will be inserted after the system message and before the human message containing the latest question.
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const contextualizeQSystemPrompt = `Given a chat history and the latest user question
which might reference context in the chat history, formulate a standalone question
which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is.`;

const contextualizeQPrompt = ChatPromptTemplate.fromMessages([
  ["system", contextualizeQSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);

const contextualizeQChain = contextualizeQPrompt
  .pipe(llm)
  .pipe(new StringOutputParser());
Using this chain we can ask follow-up questions that reference past messages and have them reformulated into standalone questions:
```typescript
import { AIMessage, HumanMessage } from "@langchain/core/messages";

await contextualizeQChain.invoke({
  chat_history: [
    new HumanMessage("What does LLM stand for?"),
    new AIMessage("Large language model"),
  ],
  question: "What is meant by large",
});
```
```text
'What is the definition of "large" in this context?'
```
Chain with chat history
-----------------------
And now we can build our full QA chain.
Notice we add some routing functionality to only run the “condense question chain” when our chat history isn’t empty. Here we’re taking advantage of the fact that if a function in an LCEL chain returns another chain, that chain will itself be invoked.
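As a minimal illustration of that behavior, consider the following sketch (the names here are hypothetical and separate from our QA chain): a function wrapped in a `RunnableLambda` can decide at runtime whether to return a value directly or hand the input off to another runnable.

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Hypothetical example: when a lambda returns another runnable,
// LCEL invokes that runnable on the same input.
const shout = RunnableLambda.from((text: string) => text.toUpperCase());

const router = RunnableLambda.from((text: string) => {
  // Returning `shout` here causes it to be invoked with `text`.
  return text.length > 3 ? shout : text;
});

await router.invoke("hello"); // "HELLO"
await router.invoke("hi"); // "hi"
```

With that in mind, here is the full chain: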
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const qaSystemPrompt = `You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.

{context}`;

const qaPrompt = ChatPromptTemplate.fromMessages([
  ["system", qaSystemPrompt],
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);

const contextualizedQuestion = (input: Record<string, unknown>) => {
  if ("chat_history" in input) {
    return contextualizeQChain;
  }
  return input.question;
};

const ragChain = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: async (input: Record<string, unknown>) => {
      if ("chat_history" in input) {
        const chain = contextualizedQuestion(input);
        return chain.pipe(retriever).pipe(formatDocumentsAsString);
      }
      return "";
    },
  }),
  qaPrompt,
  llm,
]);

const chat_history = [];
const question = "What is task decomposition?";

const aiMsg = await ragChain.invoke({ question, chat_history });
console.log(aiMsg);
chat_history.push(aiMsg);

const secondQuestion = "What are common ways of doing it?";
await ragChain.invoke({ question: secondQuestion, chat_history });
```
```text
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Task decomposition involves breaking down a complex task into smaller and simpler steps to make it m"... 358 more characters,
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Task decomposition involves breaking down a complex task into smaller and simpler steps to make it m"... 358 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 83, promptTokens: 701, totalTokens: 784 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
```

```text
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Common ways of task decomposition include using simple prompting techniques like Chain of Thought (C"... 353 more characters,
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Common ways of task decomposition include using simple prompting techniques like Chain of Thought (C"... 353 more characters,
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 81, promptTokens: 779, totalTokens: 860 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}
```
See the first [LangSmith trace here](https://smith.langchain.com/public/527981c6-5018-4b68-a11a-ebcde77843e7/r) and the [second trace here](https://smith.langchain.com/public/7b97994a-ab9f-4bf3-a2e4-abb609e5610a/r).
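Note that the chain as written returns raw `AIMessage` objects, which is convenient for appending directly to the chat history. If you’d rather work with plain strings, a minimal sketch (not part of the chain above) is to append a string output parser; you would then need to wrap the text in an `AIMessage` yourself before pushing it onto the history:

```typescript
// A sketch: parse the model output into a plain string.
const ragChainText = ragChain.pipe(new StringOutputParser());

const answerText = await ragChainText.invoke({ question, chat_history });
// `answerText` is a string here, not an AIMessage.
```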
Here we’ve gone over how to add application logic for incorporating historical outputs, but we’re still manually updating the chat history and inserting it into each input. In a real Q&A application we’ll want some way of persisting chat history and some way of automatically inserting and updating it.
For this we can use:
* [BaseChatMessageHistory](https://v02.api.js.langchain.com/classes/langchain_core_chat_history.BaseChatMessageHistory.html): Store chat history.
* [RunnableWithMessageHistory](/v0.2/docs/how_to/message_history/): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.
For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/v0.2/docs/how_to/message_history/) LCEL page.
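As a preview, here is a minimal sketch of what that wiring can look like, assuming an in-memory `ChatMessageHistory` keyed by session ID (the session bookkeeping here is illustrative, not prescriptive):

```typescript
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

// Illustrative in-memory session store; a real app would use a persistent backend.
const messageHistories: Record<string, ChatMessageHistory> = {};

const conversationalRagChain = new RunnableWithMessageHistory({
  runnable: ragChain,
  getMessageHistory: (sessionId: string) => {
    if (messageHistories[sessionId] === undefined) {
      messageHistories[sessionId] = new ChatMessageHistory();
    }
    return messageHistories[sessionId];
  },
  inputMessagesKey: "question",
  historyMessagesKey: "chat_history",
});

// The wrapper injects "chat_history" and updates it after each call.
await conversationalRagChain.invoke(
  { question: "What is task decomposition?" },
  { configurable: { sessionId: "session-1" } }
);
```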
How to stream from a question-answering chain
=============================================
Prerequisites
This guide assumes familiarity with the following:
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/)
Often in Q&A applications it’s important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.
We’ll be using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng as the retrieval content for this guide.
Setup
-----
### Dependencies
We’ll use an OpenAI chat model and embeddings and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers).
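For example, a sketch of swapping in a different chat model (this assumes you have installed `@langchain/anthropic` and set `ANTHROPIC_API_KEY`; it is not required for this guide):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Drop-in replacement for the ChatOpenAI model used below.
const llm = new ChatAnthropic({
  model: "claude-3-haiku-20240307",
  temperature: 0,
});
```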
We’ll use the following packages:
```bash
npm install --save langchain @langchain/openai cheerio
```
We need to set the `OPENAI_API_KEY` environment variable:
```bash
export OPENAI_API_KEY=YOUR_KEY
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
Chain with sources
------------------
Here is the Q&A app with sources that we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Returning sources](/v0.2/docs/how_to/qa_sources/) guide:
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { formatDocumentsAsString } from "langchain/util/document";import { RunnableSequence, RunnablePassthrough, RunnableMap,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChainFromDocs = RunnableSequence.from([ RunnablePassthrough.assign({ context: (input) => formatDocumentsAsString(input.context), }), prompt, llm, new StringOutputParser(),]);let ragChainWithSource = new RunnableMap({ steps: { context: retriever, question: new RunnablePassthrough() },});ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });await ragChainWithSource.invoke("What is Task Decomposition");
{ question: "What is Task Decomposition", context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + " \n" + " Component One: Planning\n" + " "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Resources:\n" + "1. Internet access for searches and information gathering.\n" + "2. Long Term memory management"... 456 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ], answer: "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps fo"... 230 more characters}
Let’s see what this prompt actually looks like. You can also view it [in the LangChain prompt hub](https://smith.langchain.com/hub/rlm/rag-prompt):
```typescript
console.log(
  prompt.promptMessages.map((msg) => msg.prompt.template).join("\n")
);
```
```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
```
Streaming final outputs
-----------------------
With [LCEL](/v0.2/docs/concepts#langchain-expression-language), we can stream outputs as they are generated:
```typescript
for await (const chunk of await ragChainWithSource.stream(
  "What is task decomposition?"
)) {
  console.log(chunk);
}
```
{ question: "What is task decomposition?" }{ context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + " \n" + " Component One: Planning\n" + " "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "(3) Task execution: Expert models execute on the specific tasks and log results.\n" + "Instruction:\n" + "\n" + "With "... 539 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ]}{ answer: "" }{ answer: "Task" }{ answer: " decomposition" }{ answer: " is" }{ answer: " a" }{ answer: " technique" }{ answer: " used" }{ answer: " to" }{ answer: " break" }{ answer: " down" }{ answer: " complex" }{ answer: " tasks" }{ answer: " into" }{ answer: " smaller" }{ answer: " and" }{ answer: " simpler" }{ answer: " steps" }{ answer: "." }{ answer: " It" }{ answer: " can" }{ answer: " be" }{ answer: " done" }{ answer: " through" }{ answer: " various" }{ answer: " methods" }{ answer: " such" }{ answer: " as" }{ answer: " using" }{ answer: " prompting" }{ answer: " techniques" }{ answer: "," }{ answer: " task" }{ answer: "-specific" }{ answer: " instructions" }{ answer: "," }{ answer: " or" }{ answer: " human" }{ answer: " inputs" }{ answer: "." }{ answer: " Another" }{ answer: " approach" }{ answer: " involves" }{ answer: " outsourcing" }{ answer: " the" }{ answer: " planning" }{ answer: " step" }{ answer: " to" }{ answer: " an" }{ answer: " external" }{ answer: " classical" }{ answer: " planner" }{ answer: "." }{ answer: "" }
We can add some logic to compile our stream as it’s being returned:
```typescript
// Accumulate chunks per key; log a new header whenever the key changes.
const output: Record<string, any> = {};
let currentKey: string | null = null;

for await (const chunk of await ragChainWithSource.stream(
  "What is task decomposition?"
)) {
  for (const key of Object.keys(chunk)) {
    if (output[key] === undefined) {
      output[key] = chunk[key];
    } else {
      output[key] += chunk[key];
    }
    if (key !== currentKey) {
      console.log(`\n\n${key}: ${JSON.stringify(chunk[key])}`);
    } else {
      console.log(chunk[key]);
    }
    currentKey = key;
  }
}
```
question: "What is task decomposition?"context: [{"pageContent":"Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":176,"to":181}}}},{"pageContent":"Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\nSelf-Reflection#","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":182,"to":184}}}},{"pageContent":"Agent System Overview\n \n Component One: Planning\n \n \n Task Decomposition\n \n Self-Reflection\n \n \n Component Two: Memory\n \n \n Types of Memory\n \n Maximum Inner Product Search (MIPS)\n \n \n Component Three: Tool Use\n \n Case Studies\n \n \n Scientific Discovery Agent\n \n Generative Agents Simulation\n \n Proof-of-Concept Examples\n \n \n Challenges\n \n Citation\n \n References","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":112,"to":146}}}},{"pageContent":"(3) Task execution: Expert models execute on the specific tasks and log results.\nInstruction:\n\nWith the input and the inference results, the AI assistant needs to describe the process and results. The previous stages can be formed as - User Input: {{ User Input }}, Task Planning: {{ Tasks }}, Model Selection: {{ Model Assignment }}, Task Execution: {{ Predictions }}. You must first answer the user's request in a straightforward manner. Then describe the task process and show your analysis and model inference results to the user in the first person. 
If inference results contain a file path, must tell the user the complete file path.","metadata":{"source":"https://lilianweng.github.io/posts/2023-06-23-agent/","loc":{"lines":{"from":277,"to":280}}}}]answer: ""Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done through various methods such as using prompting techniques, task-specific instructions, or human inputs. Another approach involves outsourcing the planning step to an external classical planner.
"answer"
Next steps
----------
You’ve now learned how to stream responses from a QA chain.
Next, check out some of the other how-to guides around RAG, such as [how to add chat history](/v0.2/docs/how_to/qa_chat_history_how_to).
How to return sources
=====================
Prerequisites
This guide assumes familiarity with the following:
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/)
Often in Q&A applications it’s important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.
We’ll be using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng as the retrieval content for this guide.
Setup
-----
### Dependencies
We’ll use an OpenAI chat model and embeddings and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers).
We’ll use the following packages:
```bash
npm install --save langchain @langchain/openai cheerio
```
We need to set the `OPENAI_API_KEY` environment variable:
```bash
export OPENAI_API_KEY=YOUR_KEY
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
Chain without sources
---------------------
Here is the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Quickstart](/v0.2/docs/tutorials/qa_chat_history/):
import "cheerio";import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";import { MemoryVectorStore } from "langchain/vectorstores/memory";import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";import { pull } from "langchain/hub";import { ChatPromptTemplate } from "@langchain/core/prompts";import { formatDocumentsAsString } from "langchain/util/document";import { RunnableSequence, RunnablePassthrough,} from "@langchain/core/runnables";import { StringOutputParser } from "@langchain/core/output_parsers";const loader = new CheerioWebBaseLoader( "https://lilianweng.github.io/posts/2023-06-23-agent/");const docs = await loader.load();const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200,});const splits = await textSplitter.splitDocuments(docs);const vectorStore = await MemoryVectorStore.fromDocuments( splits, new OpenAIEmbeddings());// Retrieve and generate using the relevant snippets of the blog.const retriever = vectorStore.asRetriever();const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });const ragChain = RunnableSequence.from([ { context: retriever.pipe(formatDocumentsAsString), question: new RunnablePassthrough(), }, prompt, llm, new StringOutputParser(),]);
Let’s see what this prompt actually looks like:
```typescript
console.log(
  prompt.promptMessages.map((msg) => msg.prompt.template).join("\n")
);
```
```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
```
```typescript
await ragChain.invoke("What is task decomposition?");
```
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 208 more characters
Adding sources
--------------
With LCEL, we can easily pass the retrieved documents through the chain and return them in the final response:
```typescript
import {
  RunnableMap,
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const ragChainFromDocs = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: (input) => formatDocumentsAsString(input.context),
  }),
  prompt,
  llm,
  new StringOutputParser(),
]);

let ragChainWithSource = new RunnableMap({
  steps: { context: retriever, question: new RunnablePassthrough() },
});
ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });

await ragChainWithSource.invoke("What is Task Decomposition");
```
{ question: "What is Task Decomposition", context: [ Document { pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" + "Component One: Planning#\n" + "A complicated ta"... 898 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Agent System Overview\n" + " \n" + " Component One: Planning\n" + " "... 850 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } }, Document { pageContent: "Resources:\n" + "1. Internet access for searches and information gathering.\n" + "2. Long Term memory management"... 456 more characters, metadata: { source: "https://lilianweng.github.io/posts/2023-06-23-agent/", loc: { lines: [Object] } } } ], answer: "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 256 more characters}
Check out the [LangSmith trace](https://smith.langchain.com/public/f07e78b6-cafc-41fd-af54-892c92263b09/r) to see the internals of the chain.
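If you only need the source URLs rather than the full documents, one option (a sketch, not part of the original chain) is to add a small post-processing step that keeps the answer and maps each retrieved document to its `metadata.source`:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// Reduce the returned documents to their source URLs.
const ragChainWithSourceUrls = ragChainWithSource.pipe(
  RunnableLambda.from((output: Record<string, any>) => ({
    answer: output.answer,
    sources: output.context.map(
      (doc: { metadata: { source: string } }) => doc.metadata.source
    ),
  }))
);

await ragChainWithSourceUrls.invoke("What is Task Decomposition");
```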
Next steps
----------
You’ve now learned how to return sources from your QA chains.
Next, check out some of the other guides around RAG, such as [how to stream responses](/v0.2/docs/how_to/qa_streaming).