How to return citations
=======================
Prerequisites
This guide assumes familiarity with the following:
* [Retrieval-augmented generation](/v0.2/docs/tutorials/rag/)
* [Returning structured data from a model](/v0.2/docs/how_to/structured_output/)
How can we get a model to cite which parts of the source documents it referenced in its response?
To explore some techniques for extracting citations, let’s first create a simple RAG chain. To start, we’ll just retrieve from the web using the [`TavilySearchAPIRetriever`](https://js.langchain.com/docs/integrations/retrievers/tavily).
Setup
-----
### Dependencies
We’ll use an OpenAI chat model, OpenAI embeddings, and an in-memory vector store in this walkthrough, but everything shown here works with any [ChatModel](/v0.2/docs/concepts/#chat-models) or [LLM](/v0.2/docs/concepts#llms), [Embeddings](/v0.2/docs/concepts#embedding-models), and [VectorStore](/v0.2/docs/concepts#vectorstores) or [Retriever](/v0.2/docs/concepts#retrievers).
We’ll use the following packages:
```bash
npm install --save langchain @langchain/community @langchain/openai
```
We need to set environment variables for Tavily Search & OpenAI:
```bash
export OPENAI_API_KEY=YOUR_KEY
export TAVILY_API_KEY=YOUR_KEY
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
### Initial setup
```typescript
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});
const retriever = new TavilySearchAPIRetriever({
  k: 6,
});
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You're a helpful AI assistant. Given a user question and some web article snippets, answer the user question. If none of the articles answer the question, just say you don't know.\n\nHere are the web articles:{context}",
  ],
  ["human", "{question}"],
]);
```
Now that we’ve got a model, retriever and prompt, let’s chain them all together. We’ll need to add some logic for formatting our retrieved `Document`s to a string that can be passed to our prompt. We’ll make it so our chain returns both the answer and the retrieved Documents.
```typescript
import { Document } from "@langchain/core/documents";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables";

/**
 * Format the documents into a readable string.
 */
const formatDocs = (input: Record<string, any>): string => {
  const { docs } = input;
  return (
    "\n\n" +
    docs
      .map(
        (doc: Document) =>
          `Article title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}`
      )
      .join("\n\n")
  );
};

// Subchain for generating an answer once we've done retrieval
const answerChain = prompt.pipe(llm).pipe(new StringOutputParser());
const map = RunnableMap.from({
  question: new RunnablePassthrough(),
  docs: retriever,
});

// Complete chain that calls the retriever -> formats docs to string ->
// runs answer subchain -> returns just the answer and retrieved docs.
const chain = map
  .assign({ context: formatDocs })
  .assign({ answer: answerChain })
  .pick(["answer", "docs"]);

await chain.invoke("How fast are cheetahs?");
```
{ answer: "Cheetahs are the fastest land animals on Earth. They can reach speeds as high as 75 mph or 120 km/h."... 124 more characters, docs: [ Document { pageContent: "Contact Us − +\n" + "Address\n" + "Smithsonian's National Zoo & Conservation Biology Institute 3001 Connecticut"... 1343 more characters, metadata: { title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", source: "https://nationalzoo.si.edu/animals/cheetah", score: 0.96283, images: null } }, Document { pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters, metadata: { title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters, score: 0.96052, images: null } }, Document { pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters, metadata: { title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", source: "https://www.britannica.com/animal/cheetah-mammal", score: 0.93137, images: null } }, Document { pageContent: "The science of cheetah speed\n" + "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters, metadata: { title: "How Fast Can a Cheetah Run? - ThoughtCo", source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", score: 0.91385, images: null } }, Document { pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters, metadata: { title: "The Science of a Cheetah's Speed | National Geographic", source: "https://www.youtube.com/watch?v=icFMTB0Pi0g", score: 0.90358, images: null } }, Document { pageContent: "If a lion comes along, the cheetah will abandon its catch -- it can't fight off a lion, and chances "... 911 more characters, metadata: { title: "What makes a cheetah run so fast? | HowStuffWorks", source: "https://animals.howstuffworks.com/mammals/cheetah-speed.htm", score: 0.87824, images: null } } ]}
See a LangSmith trace [here](https://smith.langchain.com/public/bb0ed37e-b2be-4ae9-8b0d-ce2aff0b4b5e/r) that shows off the internals.
Tool calling
------------
### Cite documents
Let’s try using [tool calling](/v0.2/docs/how_to/tool_calling) to make the model specify which of the provided documents it’s actually referencing when answering. LangChain has some utilities for converting [Zod](https://zod.dev) schemas to the JSONSchema format expected by providers like OpenAI. We’ll use the [`.withStructuredOutput()`](/v0.2/docs/how_to/structured_output/) method to get the model to output data matching our desired schema:
```typescript
import { z } from "zod";

const llmWithTool1 = llm.withStructuredOutput(
  z
    .object({
      answer: z
        .string()
        .describe(
          "The answer to the user question, which is based only on the given sources."
        ),
      citations: z
        .array(z.number())
        .describe(
          "The integer IDs of the SPECIFIC sources which justify the answer."
        ),
    })
    .describe("A cited source from the given text"),
  {
    name: "cited_answers",
  }
);

const exampleQ = `What is Brian's height?

Source: 1
Information: Suzy is 6'2"

Source: 2
Information: Jeremiah is blonde

Source: 3
Information: Brian is 3 inches shorter than Suzy`;

await llmWithTool1.invoke(exampleQ);
```
```text
{
  answer: `Brian is 6'2" - 3 inches = 5'11" tall.`,
  citations: [ 1, 3 ]
}
```
See a LangSmith trace [here](https://smith.langchain.com/public/28736c75-122e-4deb-9916-55c73eea3167/r) that shows off the internals.
Now we’re ready to put together our chain:
```typescript
import { Document } from "@langchain/core/documents";

const formatDocsWithId = (docs: Array<Document>): string => {
  return (
    "\n\n" +
    docs
      .map(
        (doc: Document, idx: number) =>
          `Source ID: ${idx}\nArticle title: ${doc.metadata.title}\nArticle Snippet: ${doc.pageContent}`
      )
      .join("\n\n")
  );
};

// Subchain for generating an answer once we've done retrieval
const answerChain1 = prompt.pipe(llmWithTool1);
const map1 = RunnableMap.from({
  question: new RunnablePassthrough(),
  docs: retriever,
});

// Complete chain that calls the retriever -> formats docs to string ->
// runs answer subchain -> returns just the answer and retrieved docs.
const chain1 = map1
  .assign({
    context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs),
  })
  .assign({ cited_answer: answerChain1 })
  .pick(["cited_answer", "docs"]);

await chain1.invoke("How fast are cheetahs?");
```
```text
{
  cited_answer: {
    answer: "Cheetahs can reach speeds as high as 75 mph or 120 km/h.",
    citations: [ 1, 2, 5 ]
  },
  docs: [
    Document {
      pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters,
      metadata: {
        title: "The Science of a Cheetah's Speed | National Geographic",
        source: "https://www.youtube.com/watch?v=icFMTB0Pi0g",
        score: 0.97858,
        images: null
      }
    },
    Document {
      pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters,
      metadata: {
        title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts",
        source: "https://www.britannica.com/animal/cheetah-mammal",
        score: 0.97213,
        images: null
      }
    },
    Document {
      pageContent: "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters,
      metadata: {
        title: "How Fast Can a Cheetah Run? - ThoughtCo",
        source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031",
        score: 0.95759,
        images: null
      }
    },
    Document {
      pageContent: "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute 3001 Connecticut"... 1343 more characters,
      metadata: {
        title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute",
        source: "https://nationalzoo.si.edu/animals/cheetah",
        score: 0.92422,
        images: null
      }
    },
    Document {
      pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters,
      metadata: {
        title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...",
        source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters,
        score: 0.91867,
        images: null
      }
    },
    Document {
      pageContent: "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn"... 2527 more characters,
      metadata: {
        title: "Cheetah - Wikipedia",
        source: "https://en.wikipedia.org/wiki/Cheetah",
        score: 0.81617,
        images: null
      }
    }
  ]
}
```
See a LangSmith trace [here](https://smith.langchain.com/public/86814255-b9b0-4c4f-9463-e795c9961451/r) that shows off the internals.
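The integer IDs in `citations` index into the numbered sources we formatted with `formatDocsWithId`, so they can be resolved back to the retrieved documents. As a rough sketch, the `resolveCitations` helper below is hypothetical (not part of LangChain or the chain above):

```typescript
import { Document } from "@langchain/core/documents";

// Hypothetical helper: map the integer citation IDs returned by the model
// back to the title and URL of the corresponding retrieved documents.
const resolveCitations = (
  citedAnswer: { answer: string; citations: number[] },
  docs: Array<Document>
) => ({
  answer: citedAnswer.answer,
  sources: citedAnswer.citations.map((id) => ({
    id,
    title: docs[id]?.metadata.title,
    url: docs[id]?.metadata.source,
  })),
});

// Example usage with the result from above:
// const result = await chain1.invoke("How fast are cheetahs?");
// console.log(resolveCitations(result.cited_answer, result.docs));
```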
### Cite snippets
What if we want to cite actual text spans? We can try to get our model to return these, too.
**Note**: If we break up our documents so that we have many documents with only a sentence or two instead of a few long documents, citing documents becomes roughly equivalent to citing snippets, and may be easier for the model because it only needs to return an identifier for each snippet instead of the actual text. We recommend trying both approaches and evaluating; a minimal sketch of the splitting idea is shown below.
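Here the chunk size and separators are assumptions for illustration, not values from this guide:

```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Split each retrieved document into roughly sentence-sized chunks, so that
// citing a "document" by ID effectively cites a snippet.
const sentenceSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 200,
  chunkOverlap: 0,
  separators: ["\n\n", "\n", ".", " "],
});

// The smaller chunks can then be numbered and cited with the same
// formatDocsWithId + llmWithTool1 approach from the previous section:
// const splitDocs = await sentenceSplitter.splitDocuments(docs);
// const context = formatDocsWithId(splitDocs);
```

To instead have the model quote spans directly, we define a schema for quote citations: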
```typescript
import { Document } from "@langchain/core/documents";

const citationSchema = z.object({
  sourceId: z
    .number()
    .describe(
      "The integer ID of a SPECIFIC source which justifies the answer."
    ),
  quote: z
    .string()
    .describe(
      "The VERBATIM quote from the specified source that justifies the answer."
    ),
});

const llmWithTool2 = llm.withStructuredOutput(
  z.object({
    answer: z
      .string()
      .describe(
        "The answer to the user question, which is based only on the given sources."
      ),
    citations: z
      .array(citationSchema)
      .describe("Citations from the given sources that justify the answer."),
  }),
  {
    name: "quoted_answer",
  }
);

const answerChain2 = prompt.pipe(llmWithTool2);
const map2 = RunnableMap.from({
  question: new RunnablePassthrough(),
  docs: retriever,
});

// Complete chain that calls the retriever -> formats docs to string ->
// runs answer subchain -> returns just the answer and retrieved docs.
const chain2 = map2
  .assign({
    context: (input: { docs: Array<Document> }) => formatDocsWithId(input.docs),
  })
  .assign({ quoted_answer: answerChain2 })
  .pick(["quoted_answer", "docs"]);

await chain2.invoke("How fast are cheetahs?");
```
```text
{
  quoted_answer: {
    answer: "Cheetahs can reach speeds of up to 120kph or 75mph, making them the world’s fastest land animals.",
    citations: [
      {
        sourceId: 5,
        quote: "Cheetahs can reach speeds of up to 120kph or 75mph, making them the world’s fastest land animals."
      },
      {
        sourceId: 1,
        quote: "The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as hi"... 25 more characters
      },
      {
        sourceId: 3,
        quote: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 72 more characters
      }
    ]
  },
  docs: [
    Document {
      pageContent: "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute 3001 Connecticut"... 1343 more characters,
      metadata: {
        title: "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute",
        source: "https://nationalzoo.si.edu/animals/cheetah",
        score: 0.95973,
        images: null
      }
    },
    Document {
      pageContent: "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, cap"... 738 more characters,
      metadata: {
        title: "How Fast Can a Cheetah Run? - ThoughtCo",
        source: "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031",
        score: 0.92749,
        images: null
      }
    },
    Document {
      pageContent: "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the che"... 880 more characters,
      metadata: {
        title: "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...",
        source: "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-abou"... 21 more characters,
        score: 0.92417,
        images: null
      }
    },
    Document {
      pageContent: "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely r"... 1048 more characters,
      metadata: {
        title: "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts",
        source: "https://www.britannica.com/animal/cheetah-mammal",
        score: 0.92341,
        images: null
      }
    },
    Document {
      pageContent: "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the ma"... 60 more characters,
      metadata: {
        title: "The Science of a Cheetah's Speed | National Geographic",
        source: "https://www.youtube.com/watch?v=icFMTB0Pi0g",
        score: 0.90025,
        images: null
      }
    },
    Document {
      pageContent: "In fact, they are more closely related to kangaroos…\nRead more\nAnimals on the Galapagos Islands: A G"... 987 more characters,
      metadata: {
        title: "How fast can cheetahs run, and what enables their incredible speed?",
        source: "https://wildlifefaq.com/cheetah-speed/",
        score: 0.87121,
        images: null
      }
    }
  ]
}
```
You can check out a LangSmith trace [here](https://smith.langchain.com/public/f0588adc-1914-45e8-a2ed-4fa028cea0e1/r) that shows off the internals.
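Since the model is instructed to return VERBATIM quotes, one useful sanity check is to verify that each quote actually occurs in its cited source. The `verifyQuotes` helper below is a hypothetical post-check, not part of the chain above:

```typescript
import { Document } from "@langchain/core/documents";

// Hypothetical check: confirm each quote appears verbatim in its cited source.
// Exact substring matching is strict; models sometimes lightly paraphrase, so
// fuzzy matching may be needed in practice.
const verifyQuotes = (
  quotedAnswer: {
    answer: string;
    citations: Array<{ sourceId: number; quote: string }>;
  },
  docs: Array<Document>
) =>
  quotedAnswer.citations.map((c) => ({
    ...c,
    verified: docs[c.sourceId]?.pageContent.includes(c.quote) ?? false,
  }));

// const result = await chain2.invoke("How fast are cheetahs?");
// console.log(verifyQuotes(result.quoted_answer, result.docs));
```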
Direct prompting
----------------
Not all models support tool-calling. We can achieve similar results with direct prompting. Let’s see what this looks like using an older Anthropic chat model that is particularly proficient in working with XML:
### Setup
Install the LangChain Anthropic integration package:
```bash
npm install @langchain/anthropic
```
Add your Anthropic API key to your environment:
```bash
export ANTHROPIC_API_KEY=YOUR_KEY
```
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { XMLOutputParser } from "@langchain/core/output_parsers";
import { Document } from "@langchain/core/documents";
import {
  RunnableLambda,
  RunnablePassthrough,
  RunnableMap,
} from "@langchain/core/runnables";

const anthropic = new ChatAnthropic({
  model: "claude-instant-1.2",
  temperature: 0,
});

const system = `You're a helpful AI assistant. Given a user question and some web article snippets,
answer the user question and provide citations. If none of the articles answer the question, just say you don't know.

Remember, you must return both an answer and citations. A citation consists of a VERBATIM quote that
justifies the answer and the ID of the quote article. Return a citation for every quote across all articles
that justify the answer. Use the following format for your final output:

<cited_answer>
  <answer></answer>
  <citations>
    <citation><source_id></source_id><quote></quote></citation>
    <citation><source_id></source_id><quote></quote></citation>
    ...
  </citations>
</cited_answer>

Here are the web articles:{context}`;

const anthropicPrompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const formatDocsToXML = (docs: Array<Document>): string => {
  const formatted: Array<string> = [];
  docs.forEach((doc, idx) => {
    const docStr = `<source id="${idx}">
  <title>${doc.metadata.title}</title>
  <article_snippet>${doc.pageContent}</article_snippet>
</source>`;
    formatted.push(docStr);
  });
  return `\n\n<sources>${formatted.join("\n")}</sources>`;
};

const format3 = new RunnableLambda({
  func: (input: { docs: Array<Document> }) => formatDocsToXML(input.docs),
});

const answerChain = anthropicPrompt
  .pipe(anthropic)
  .pipe(new XMLOutputParser())
  .pipe(
    new RunnableLambda({
      func: (input: { cited_answer: any }) => input.cited_answer,
    })
  );

const map3 = RunnableMap.from({
  question: new RunnablePassthrough(),
  docs: retriever,
});

const chain3 = map3
  .assign({ context: format3 })
  .assign({ cited_answer: answerChain })
  .pick(["cited_answer", "docs"]);

const res = await chain3.invoke("How fast are cheetahs?");

console.log(JSON.stringify(res, null, 2));
```
{ "cited_answer": [ { "answer": "Cheetahs can reach top speeds of around 75 mph, but can only maintain bursts of speed for short distances before tiring." }, { "citations": [ { "citation": [ { "source_id": "1" }, { "quote": "Scientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower." } ] }, { "citation": [ { "source_id": "3" }, { "quote": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey." } ] } ] } ], "docs": [ { "pageContent": "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the magazine's November 2012 iPad edition. See the other: http:...", "metadata": { "title": "The Science of a Cheetah's Speed | National Geographic", "source": "https://www.youtube.com/watch?v=icFMTB0Pi0g", "score": 0.96603, "images": null } }, { "pageContent": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack.\n Key Takeaways: How Fast Can a Cheetah Run?\nFastest Cheetah on Earth\nScientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:\nThe pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed. Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.\n", "metadata": { "title": "How Fast Can a Cheetah Run? 
- ThoughtCo", "source": "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", "score": 0.96212, "images": null } }, { "pageContent": "Now, their only hope lies in the hands of human conservationists, working tirelessly to save the cheetahs, the leopards and all the other wildlife of the scattered savannas and other habitats of Africa and Asia.\n Their tough paw pads and grippy claws are made to grab at the ground, and their large nasal passages and lungs facilitate the flow of oxygen and allow their rapid intake of air as they reach their top speeds.\n And though the two cats share a similar coloration, a cheetah's spots are circular while a leopard's spots are rose-shaped \"rosettes,\" with the centers of their spots showing off the tan color of their coats.\n Also classified as \"vulnerable\" are two of the cheetah's foremost foes, the lion and the leopard, the latter of which is commonly confused for the cheetah thanks to its own flecked fur.\n The cats are also consumers of the smallest of the bigger, bulkier antelopes, such as sables and kudus, and are known to gnaw on the occasional rabbit or bird.\n", "metadata": { "title": "How Fast Are Cheetahs, and Other Fascinating Facts About the World's ...", "source": "https://www.discovermagazine.com/planet-earth/how-fast-are-cheetahs-and-other-fascinating-facts-about-the-worlds-quickest", "score": 0.95688, "images": null } }, { "pageContent": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.\ncheetah,\n(Acinonyx jubatus),\none of the world’s most-recognizable cats, known especially for its speed. Their fur is dark and includes a thick yellowish gray mane along the back, a trait that presumably offers better camouflage and increased protection from high temperatures during the day and low temperatures at night during the first few months of life. Cheetahs eat a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan).\n A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.", "metadata": { "title": "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", "source": "https://www.britannica.com/animal/cheetah-mammal", "score": 0.95589, "images": null } }, { "pageContent": "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute 3001 Connecticut Ave., NW Washington, DC 20008\nAbout the Zoo\n−\n+\nCareers\n−\n+\nNews & Media\n−\n+\nFooter Donate\n−\n+\nShop\n−\n+\nFollow us on social media\nSign Up for Emails\nFooter - SI logo, privacy, terms Conservation Efforts\nHistorically, cheetahs ranged widely throughout Africa and Asia, from the Cape of Good Hope to the Mediterranean, throughout the Arabian Peninsula and the Middle East, from Israel, India and Pakistan north to the northern shores of the Caspian and Aral Seas, and west through Uzbekistan, Turkmenistan, Afghanistan, and Pakistan into central India. Header Links\nToday's hours: 8 a.m. to 4 p.m. 
(last entry 3 p.m.)\nMega menu\nAnimals Global Nav Links\nElephant Cam\nSee the Smithsonian's National Zoo's Asian elephants — Spike, Bozie, Kamala, Swarna and Maharani — both inside the Elephant Community Center and outside in their yards.\n Conservation Global Nav Links\nAbout the Smithsonian Conservation Biology Institute\nCheetah\nAcinonyx jubatus\nBuilt for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun Facts\nConservation Status\nCheetah News\nTaxonomic Information\nAnimal News\nNZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.\n", "metadata": { "title": "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", "source": "https://nationalzoo.si.edu/animals/cheetah", "score": 0.94744, "images": null } }, { "pageContent": "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]\nOne stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chase. In December 2016 the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.[25]\nThe cheetah was reintroduced in Malawi in 2017.[160]\nIn Asia\nIn 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, UNDP and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey.[161][162] Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.[47][60][122]\nCheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day.[123] Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. 
This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon.[61][86] The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.[2][87]\nSpeed and acceleration\nThe cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull.[60][65] A study suggested that the limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone in cheetahs.[77]\nThe cheetah has a total of 30 teeth; the dental formula is 3.1.3.13.1.2.1.", "metadata": { "title": "Cheetah - Wikipedia", "source": "https://en.wikipedia.org/wiki/Cheetah", "score": 0.81312, "images": null } } ]}
Check out this LangSmith trace [here](https://smith.langchain.com/public/e2e938e8-f847-4ea8-bc84-43d4eaf8e524/r) for more on the internals.
Retrieval post-processing
-------------------------
Another approach is to post-process our retrieved documents to compress the content, so that the source content is already minimal enough that we don’t need the model to cite specific sources or spans. For example, we could break up each document into a sentence or two, embed those and keep only the most relevant ones. LangChain has some built-in components for this. Here we’ll use a [`RecursiveCharacterTextSplitter`](https://js.langchain.com/docs/modules/data_connection/document_transformers/recursive_text_splitter), which creates chunks of a specified size by splitting on separator substrings, and an [`EmbeddingsFilter`](https://js.langchain.com/docs/modules/data_connection/retrievers/contextual_compression#embeddingsfilter), which keeps only the texts with the most relevant embeddings.
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { DocumentInterface } from "@langchain/core/documents";
import { RunnableMap, RunnablePassthrough } from "@langchain/core/runnables";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 400,
  chunkOverlap: 0,
  separators: ["\n\n", "\n", ".", " "],
  keepSeparator: false,
});

const compressor = new EmbeddingsFilter({
  embeddings: new OpenAIEmbeddings(),
  k: 10,
});

// Split the retrieved docs into small chunks, then keep only the chunks whose
// embeddings are most relevant to the question.
const splitAndFilter = async (input: {
  docs: Array<DocumentInterface>;
  question: string;
}): Promise<Array<DocumentInterface>> => {
  const { docs, question } = input;
  const splitDocs = await splitter.splitDocuments(docs);
  const statefulDocs = await compressor.compressDocuments(splitDocs, question);
  return statefulDocs;
};

const retrieveMap = RunnableMap.from({
  question: new RunnablePassthrough(),
  docs: retriever,
});

const retrieve = retrieveMap.pipe(splitAndFilter);

const docs = await retrieve.invoke("How fast are cheetahs?");

for (const doc of docs) {
  console.log(doc.pageContent, "\n\n");
}
```
```text
The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.
cheetah,
(Acinonyx jubatus),


The science of cheetah speed
The cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack. Key Takeaways: How Fast Can a Cheetah Run?
Fastest Cheetah on Earth


Built for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun Facts
Conservation Status
Cheetah News
Taxonomic Information
Animal News
NZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.


The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]


The cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull


Scientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:


One stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chase


The pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed. Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.


A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.


Cheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day
```
See the LangSmith trace [here](https://smith.langchain.com/public/ae6b1f52-c1fe-49ec-843c-92edf2104652/r) for the internals.
```typescript
const chain4 = retrieveMap
  .assign({ context: formatDocs })
  .assign({ answer: answerChain })
  .pick(["answer", "docs"]);

// Note the documents have an article "summary" in the metadata that is now
// much longer than the actual document page content. This summary isn't
// actually passed to the model.
const res = await chain4.invoke("How fast are cheetahs?");

console.log(JSON.stringify(res, null, 2));
```
{ "answer": [ { "answer": "\nCheetahs are the fastest land animals. They can reach top speeds between 75-81 mph (120-130 km/h). \n" }, { "citations": [ { "citation": [ { "source_id": "Article title: How Fast Can a Cheetah Run? - ThoughtCo" }, { "quote": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h." } ] }, { "citation": [ { "source_id": "Article title: Cheetah - Wikipedia" }, { "quote": "Scientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower." } ] } ] } ], "docs": [ { "pageContent": "The science of cheetah speed\nThe cheetah (Acinonyx jubatus) is the fastest land animal on Earth, capable of reaching speeds as high as 75 mph or 120 km/h. Cheetahs are predators that sneak up on their prey and sprint a short distance to chase and attack.\n Key Takeaways: How Fast Can a Cheetah Run?\nFastest Cheetah on Earth\nScientists calculate a cheetah's top speed is 75 mph, but the fastest recorded speed is somewhat slower. The top 10 fastest animals are:\nThe pronghorn, an American animal resembling an antelope, is the fastest land animal in the Western Hemisphere. While a cheetah's top speed ranges from 65 to 75 mph (104 to 120 km/h), its average speed is only 40 mph (64 km/hr), punctuated by short bursts at its top speed. Basically, if a predator threatens to take a cheetah's kill or attack its young, a cheetah has to run.\n", "metadata": { "title": "How Fast Can a Cheetah Run? - ThoughtCo", "source": "https://www.thoughtco.com/how-fast-can-a-cheetah-run-4587031", "score": 0.96949, "images": null } }, { "pageContent": "The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn at 88.5 km/h (55.0 mph)[96] and the springbok at 88 km/h (55 mph),[97] but the cheetah additionally has an exceptional acceleration.[98]\nOne stride of a galloping cheetah measures 4 to 7 m (13 to 23 ft); the stride length and the number of jumps increases with speed.[60] During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length.[99] Running cheetahs can retain up to 90% of the heat generated during the chase. In December 2016 the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.[25]\nThe cheetah was reintroduced in Malawi in 2017.[160]\nIn Asia\nIn 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, UNDP and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey.[161][162] Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.[47][60][122]\nCheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion.[66] Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day.[123] Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. 
This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon.[61][86] The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.[2][87]\nSpeed and acceleration\nThe cheetah is the world's fastest land animal.[88][89][90][91][92] Estimates of the maximum speed attained range from 80 to 128 km/h (50 to 80 mph).[60][63] A commonly quoted value is 112 km/h (70 mph), recorded in 1957, but this measurement is disputed.[93] The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull.[60][65] A study suggested that the limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone in cheetahs.[77]\nThe cheetah has a total of 30 teeth; the dental formula is 3.1.3.13.1.2.1.", "metadata": { "title": "Cheetah - Wikipedia", "source": "https://en.wikipedia.org/wiki/Cheetah", "score": 0.96423, "images": null } }, { "pageContent": "One of two videos from National Geographic's award-winning multimedia coverage of cheetahs in the magazine's November 2012 iPad edition. See the other: http:...", "metadata": { "title": "The Science of a Cheetah's Speed | National Geographic", "source": "https://www.youtube.com/watch?v=icFMTB0Pi0g", "score": 0.96071, "images": null } }, { "pageContent": "Contact Us − +\nAddress\nSmithsonian's National Zoo & Conservation Biology Institute 3001 Connecticut Ave., NW Washington, DC 20008\nAbout the Zoo\n−\n+\nCareers\n−\n+\nNews & Media\n−\n+\nFooter Donate\n−\n+\nShop\n−\n+\nFollow us on social media\nSign Up for Emails\nFooter - SI logo, privacy, terms Conservation Efforts\nHistorically, cheetahs ranged widely throughout Africa and Asia, from the Cape of Good Hope to the Mediterranean, throughout the Arabian Peninsula and the Middle East, from Israel, India and Pakistan north to the northern shores of the Caspian and Aral Seas, and west through Uzbekistan, Turkmenistan, Afghanistan, and Pakistan into central India. Header Links\nToday's hours: 8 a.m. to 4 p.m. (last entry 3 p.m.)\nMega menu\nAnimals Global Nav Links\nElephant Cam\nSee the Smithsonian's National Zoo's Asian elephants — Spike, Bozie, Kamala, Swarna and Maharani — both inside the Elephant Community Center and outside in their yards.\n Conservation Global Nav Links\nAbout the Smithsonian Conservation Biology Institute\nCheetah\nAcinonyx jubatus\nBuilt for speed, the cheetah can accelerate from zero to 45 in just 2.5 seconds and reach top speeds of 60 to 70 mph, making it the fastest land mammal! Fun Facts\nConservation Status\nCheetah News\nTaxonomic Information\nAnimal News\nNZCBI staff in Front Royal, Virginia, are mourning the loss of Walnut, a white-naped crane who became an internet sensation for choosing one of her keepers as her mate.\n", "metadata": { "title": "Cheetah | Smithsonian's National Zoo and Conservation Biology Institute", "source": "https://nationalzoo.si.edu/animals/cheetah", "score": 0.91577, "images": null } }, { "pageContent": "The maximum speed cheetahs have been measured at is 114 km (71 miles) per hour, and they routinely reach velocities of 80–100 km (50–62 miles) per hour while pursuing prey.\ncheetah,\n(Acinonyx jubatus),\none of the world’s most-recognizable cats, known especially for its speed. 
Their fur is dark and includes a thick yellowish gray mane along the back, a trait that presumably offers better camouflage and increased protection from high temperatures during the day and low temperatures at night during the first few months of life. Cheetahs eat a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan).\n A cheetah eats a variety of small animals, including game birds, rabbits, small antelopes (including the springbok, impala, and gazelle), young warthogs, and larger antelopes (such as the kudu, hartebeest, oryx, and roan). Their faces are distinguished by prominent black lines that curve from the inner corner of each eye to the outer corners of the mouth, like a well-worn trail of inky tears.", "metadata": { "title": "Cheetah | Description, Speed, Habitat, Diet, Cubs, & Facts", "source": "https://www.britannica.com/animal/cheetah-mammal", "score": 0.91163, "images": null } }, { "pageContent": "If a lion comes along, the cheetah will abandon its catch -- it can't fight off a lion, and chances are, the cheetah will lose its life along with its prey if it doesn't get out of there fast enough.\n Advertisement\nLots More Information\nMore Great Links\nSources\nPlease copy/paste the following text to properly cite this HowStuffWorks.com article:\nAdvertisement\nAdvertisement\nAdvertisement\nAdvertisement\nAdvertisement If confronted, a roughly 125-pound cheetah will always run rather than fight -- it's too weak, light and thin to have any chance against something like a lion, which can be twice as long as a cheetah and weigh more than 400 pounds (181.4 kg) Cheetah moms spend a lot of time teaching their cubs to chase, sometimes dragging live animals back to the den so the cubs can practice the chase-and-catch process.\n It's more like a bound at that speed, completing up to three strides per second, with only one foot on the ground at any time and several stages when feet don't touch the ground at all.", "metadata": { "title": "What makes a cheetah run so fast? | HowStuffWorks", "source": "https://animals.howstuffworks.com/mammals/cheetah-speed.htm", "score": 0.89019, "images": null } } ]}
Check out the LangSmith trace [here](https://smith.langchain.com/public/b767cca0-6061-4208-99f2-7f522b94a587/r) to see the internals.
Next steps
----------
You’ve now learned a few ways to return citations from your QA chains.
Next, check out some of the other guides in this section, such as [how to add chat history](/v0.2/docs/how_to/qa_chat_history_how_to).
https://js.langchain.com/v0.2/docs/how_to/query_few_shot | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to add examples to the prompt
=================================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.
Let’s take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [query analysis tutorial](/v0.2/docs/tutorials/query_analysis).
Setup
-----
### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm i zod uuid
# or:
yarn add zod uuid
# or:
pnpm add zod uuid
```
### Set environment variables
```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Query schema
------------
We’ll define a query schema that we want our model to output. To make our query analysis a bit more interesting, we’ll add a `subQueries` field that contains more narrow questions derived from the top level question.
import { z } from "zod";const subQueriesDescription = `If the original question contains multiple distinct sub-questions,or if there are more generic questions that would be helpful to answer inorder to answer the original question, write a list of all relevant sub-questions.Make sure this list is comprehensive and covers all parts of the original question.It's ok if there's redundancy in the sub-questions, it's better to cover all the bases than to miss some.Make sure the sub-questions are as narrowly focused as possible in order to get the most relevant results.`;const searchSchema = z.object({ query: z .string() .describe("Primary similarity search query applied to video transcripts."), subQueries: z.array(z.string()).optional().describe(subQueriesDescription), publishYear: z.number().optional().describe("Year video was published"),});
Query generation
----------------
### Pick your chat model:

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). Each `npm i` command below can be swapped for `yarn add` or `pnpm add`.

#### OpenAI

```bash
npm i @langchain/openai
```

```bash
OPENAI_API_KEY=your-api-key
```

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

#### Anthropic

```bash
npm i @langchain/anthropic
```

```bash
ANTHROPIC_API_KEY=your-api-key
```

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

```bash
npm i @langchain/community
```

```bash
FIREWORKS_API_KEY=your-api-key
```

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

```bash
npm i @langchain/mistralai
```

```bash
MISTRAL_API_KEY=your-api-key
```

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0 });
```

#### Groq

```bash
npm i @langchain/groq
```

```bash
GROQ_API_KEY=your-api-key
```

```typescript
import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

#### VertexAI

```bash
npm i @langchain/google-vertexai
```

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["placeholder", "{examples}"],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
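Note that the `["placeholder", "{examples}"]` entry behaves as an optional slot: when no `examples` value is supplied, as in the invocation below, it simply contributes no messages to the prompt.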
Let’s try out our query analyzer without any examples in the prompt:
```typescript
await queryAnalyzer.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);
```
{ query: "difference between Web Voyager and Reflection Agents", subQueries: [ "Do Web Voyager and Reflection Agents use LangGraph?" ]}
Adding examples and tuning the prompt
-------------------------------------
This works pretty well, but we probably want it to decompose the question even further to separate the queries about Web Voyager and Reflection Agents.
To tune our query generation results, we can add some examples of input questions and gold-standard output queries to our prompt.
```typescript
const examples = [];

examples.push({
  input: "What's chat langchain, is it a langchain template?",
  toolCalls: [
    {
      query: "What is chat langchain and is it a langchain template?",
      subQueries: ["What is chat langchain", "What is a langchain template"],
    },
  ],
});

examples.push({
  input: "How to build multi-agent system and stream intermediate steps from it",
  toolCalls: [
    {
      query:
        "How to build multi-agent system and stream intermediate steps from it",
      subQueries: [
        "How to build multi-agent system",
        "How to stream intermediate steps from multi-agent system",
        "How to stream intermediate steps",
      ],
    },
  ],
});

examples.push({
  input: "LangChain agents vs LangGraph?",
  toolCalls: [
    {
      query:
        "What's the difference between LangChain agents and LangGraph? How do you deploy them?",
      subQueries: [
        "What are LangChain agents",
        "What is LangGraph",
        "How do you deploy LangChain agents",
        "How do you deploy LangGraph",
      ],
    },
  ],
});
```
Now we need to update our prompt template and chain so that the examples are included in each prompt. Since we're working with LLM function calling, we'll need to do a bit of extra structuring to send example inputs and outputs to the model. We'll create a `toolExampleToMessages` helper function to handle this for us:
```typescript
import {
  AIMessage,
  BaseMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuidV4 } from "uuid";

const toolExampleToMessages = (
  example: Record<string, any>
): Array<BaseMessage> => {
  // Each example becomes: human question -> AI tool call(s) -> tool output(s).
  const messages: Array<BaseMessage> = [
    new HumanMessage({ content: example.input }),
  ];
  const openaiToolCalls = example.toolCalls.map((toolCall) => {
    return {
      id: uuidV4(),
      type: "function" as const,
      function: {
        name: "search",
        arguments: JSON.stringify(toolCall),
      },
    };
  });
  messages.push(
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openaiToolCalls },
    })
  );
  // Fake a successful tool response for each call unless one is provided.
  const toolOutputs =
    "toolOutputs" in example
      ? example.toolOutputs
      : Array(openaiToolCalls.length).fill(
          "You have correctly called this tool."
        );
  toolOutputs.forEach((output, index) => {
    messages.push(
      new ToolMessage({
        content: output,
        tool_call_id: openaiToolCalls[index].id,
      })
    );
  });
  return messages;
};

const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat();
```
```typescript
const queryAnalyzerWithExamples = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    examples: () => exampleMessages,
  },
  prompt,
  llmWithTools,
]);
```
```typescript
await queryAnalyzerWithExamples.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);
```
{ query: "Difference between Web Voyager and Reflection agents, do they both use LangGraph?", subQueries: [ "Difference between Web Voyager and Reflection agents", "Do Web Voyager and Reflection agents use LangGraph" ]}
Thanks to our examples we get a slightly more decomposed search query. With some more prompt engineering and tuning of our examples we could improve query generation even more.
You can see that the examples are passed to the model as messages in the [LangSmith trace](https://smith.langchain.com/public/102829c3-69fc-4cb7-b28b-399ae2c9c008/r).
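If we wanted to push decomposition further, one option (a sketch, not part of the original guide; the example content below is hypothetical) is to add an example that directly mirrors the comparison-style question we tested, then rebuild the analyzer so the new example is picked up:

```typescript
// Hypothetical extra example targeting comparison-style questions.
examples.push({
  input: "web voyager vs reflection agents",
  toolCalls: [
    {
      query: "What's the difference between Web Voyager and Reflection Agents?",
      subQueries: [
        "What is Web Voyager",
        "What are Reflection Agents",
        "Does Web Voyager use LangGraph",
        "Do Reflection Agents use LangGraph",
      ],
    },
  ],
});

const queryAnalyzerTuned = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    // Recompute the messages so the newly added example is included.
    examples: () => examples.map((ex) => toolExampleToMessages(ex)).flat(),
  },
  prompt,
  llmWithTools,
]);
```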
Next steps
----------
You’ve now learned some techniques for combining few-shotting with query analysis.
Next, check out some of the other query analysis guides in this section, like [how to deal with high cardinality data](/v0.2/docs/how_to/query_high_cardinality).
https://js.langchain.com/v0.2/docs/how_to/query_constructing_filters
How to construct filters
========================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
We may want to do query analysis to extract filters to pass into retrievers. One way to have the LLM represent these filters is as a Zod schema. There is then the issue of converting the values extracted with that schema into a filter that can be passed into a retriever.
This can be done manually, but LangChain also provides some “Translators” that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators.
Setup
-----
### Install dependencies
```bash
npm i zod
# or:
yarn add zod
# or:
pnpm add zod
```
In this example, `startYear` and `author` are both attributes to filter on.
import { z } from "zod";const searchSchema = z.object({ query: z.string(), startYear: z.number().optional(), author: z.string().optional(),});const searchQuery: z.infer<typeof searchSchema> = { query: "RAG", startYear: 2022, author: "LangChain",};
```typescript
import { Comparison, Comparator } from "langchain/chains/query_constructor/ir";

function constructComparisons(
  query: z.infer<typeof searchSchema>
): Comparison[] {
  const comparisons: Comparison[] = [];
  if (query.startYear !== undefined) {
    comparisons.push(
      new Comparison("gt" as Comparator, "start_year", query.startYear)
    );
  }
  if (query.author !== undefined) {
    comparisons.push(
      new Comparison("eq" as Comparator, "author", query.author)
    );
  }
  return comparisons;
}

const comparisons = constructComparisons(searchQuery);
```
```typescript
import { Operation, Operator } from "langchain/chains/query_constructor/ir";

const _filter = new Operation("and" as Operator, comparisons);
```
```typescript
import { ChromaTranslator } from "langchain/retrievers/self_query/chroma";

new ChromaTranslator().visitOperation(_filter);
```
{ "$and": [ { start_year: { "$gt": 2022 } }, { author: { "$eq": "LangChain" } } ]}
Next steps
----------
You’ve now learned how to create a specific filter from an arbitrary query.
Next, check out some of the other query analysis guides in this section, like [how to use few-shotting to improve performance](/v0.2/docs/how_to/query_few_shot).
https://js.langchain.com/v0.2/docs/how_to/query_high_cardinality
How to deal with high cardinality categorical variables
=======================================================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
High cardinality data refers to columns in a dataset that contain a large number of unique values. This guide demonstrates some techniques for dealing with these inputs.
For example, you may want to do query analysis to create a filter on a categorical column. One of the difficulties here is that you usually need to specify the EXACT categorical value. The issue is you need to make sure the LLM generates that categorical value exactly. This can be done relatively easily with prompting when there are only a few valid values. When there are a high number of valid values, it becomes more difficult, as those values may not fit in the LLM context, or (if they do) there may be too many for the LLM to properly attend to.
In this guide we take a look at how to approach this.
Setup
-----
### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm i @langchain/community zod @faker-js/faker
# or:
yarn add @langchain/community zod @faker-js/faker
# or:
pnpm add @langchain/community zod @faker-js/faker
```
### Set environment variables
```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
#### Set up data
We will generate a bunch of fake names
```typescript
import { faker } from "@faker-js/faker";

const names = Array.from({ length: 10000 }, () => faker.person.fullName());
```
Let’s look at some of the names
```typescript
names[0];
```

```text
"Rolando Wilkinson"
```

```typescript
names[567];
```

```text
"Homer Harber"
```
Query Analysis
--------------
We can now set up a baseline query analysis
import { z } from "zod";const searchSchema = z.object({ query: z.string(), author: z.string(),});
### Pick your chat model:

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). Each `npm i` command below can be swapped for `yarn add` or `pnpm add`.

#### OpenAI

```bash
npm i @langchain/openai
```

```bash
OPENAI_API_KEY=your-api-key
```

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

#### Anthropic

```bash
npm i @langchain/anthropic
```

```bash
ANTHROPIC_API_KEY=your-api-key
```

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

```bash
npm i @langchain/community
```

```bash
FIREWORKS_API_KEY=your-api-key
```

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

```bash
npm i @langchain/mistralai
```

```bash
MISTRAL_API_KEY=your-api-key
```

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0 });
```

#### Groq

```bash
npm i @langchain/groq
```

```bash
GROQ_API_KEY=your-api-key
```

```typescript
import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

#### VertexAI

```bash
npm i @langchain/google-vertexai
```

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `Generate a relevant search query for a library system`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
We can see that if we spell the name exactly correctly, it knows how to handle it
await queryAnalyzer.invoke("what are books about aliens by Jesse Knight");
{ query: "aliens", author: "Jesse Knight" }
The issue is that the values you want to filter on may NOT be spelled exactly correctly
await queryAnalyzer.invoke("what are books about aliens by jess knight");
{ query: "books about aliens", author: "jess knight" }
### Add in all values
One way around this is to add ALL possible values to the prompt. That will generally guide the query in the right direction
```typescript
// This redefines `system` and `prompt` from the baseline setup above
// (the original guide runs these as separate notebook cells).
const system = `Generate a relevant search query for a library system using the 'search' tool.

The 'author' you return to the user MUST be one of the following authors:

{authors}

Do NOT hallucinate author name!`;

const basePrompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const prompt = await basePrompt.partial({ authors: names.join(", ") });

const queryAnalyzerAll = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
However… if the list of categoricals is long enough, it may error!
```typescript
try {
  const res = await queryAnalyzerAll.invoke(
    "what are books about aliens by jess knight"
  );
} catch (e) {
  console.error(e);
}
```
```text
Error: 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens (50167 in the messages, 30 in the functions). Please reduce the length of the messages or functions.
    at Function.generate (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:256:25)
    at OpenAI.makeRequest (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/openai/4.47.1/core.mjs:299:30)
    at eventLoopTick (ext:core/01_core.js:63:7)
    at async file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/@langchain/openai/0.0.31/dist/chat_models.js:756:29
    at async RetryOperation._fn (file:///Users/jacoblee/Library/Caches/deno/npm/registry.npmjs.org/p-retry/4.6.2/index.js:50:12) {
  status: 400,
  headers: {
    "alt-svc": 'h3=":443"; ma=86400',
    "cf-cache-status": "DYNAMIC",
    "cf-ray": "885f794b3df4fa52-SJC",
    "content-length": "340",
    "content-type": "application/json",
    date: "Sat, 18 May 2024 23:02:16 GMT",
    "openai-organization": "langchain",
    "openai-processing-ms": "230",
    "openai-version": "2020-10-01",
    server: "cloudflare",
    "set-cookie": "_cfuvid=F_c9lnRuQDUhKiUE2eR2PlsxHPldf1OAVMonLlHTjzM-1716073336256-0.0.1.1-604800000; path=/; domain="... 48 more characters,
    "strict-transport-security": "max-age=15724800; includeSubDomains",
    "x-ratelimit-limit-requests": "10000",
    "x-ratelimit-limit-tokens": "2000000",
    "x-ratelimit-remaining-requests": "9999",
    "x-ratelimit-remaining-tokens": "1958402",
    "x-ratelimit-reset-requests": "6ms",
    "x-ratelimit-reset-tokens": "1.247s",
    "x-request-id": "req_7b88677d6883fac1520e44543f68c839"
  },
  request_id: "req_7b88677d6883fac1520e44543f68c839",
  error: {
    message: "This model's maximum context length is 16385 tokens. However, your messages resulted in 50197 tokens"... 101 more characters,
    type: "invalid_request_error",
    param: "messages",
    code: "context_length_exceeded"
  },
  code: "context_length_exceeded",
  param: "messages",
  type: "invalid_request_error",
  attemptNumber: 1,
  retriesLeft: 6
}
```
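To avoid hitting the limit at request time, you could estimate the prompt size up front. A sketch using the `js-tiktoken` package (an assumption; it is not used in the original guide):

```typescript
import { encodingForModel } from "js-tiktoken";

// Rough token count for the stuffed author list alone.
const enc = encodingForModel("gpt-3.5-turbo");
const authorTokens = enc.encode(names.join(", ")).length;
console.log(authorTokens); // ~50k tokens for 10,000 names, far over the 16k limit above
```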
We can try to use a longer context window… but with so much information in there, it is not guaranteed to pick it up reliably
### Pick your chat model:

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages). Each `npm i` command below can be swapped for `yarn add` or `pnpm add`.

#### OpenAI

```bash
npm i @langchain/openai
```

```bash
OPENAI_API_KEY=your-api-key
```

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llmLong = new ChatOpenAI({ model: "gpt-4-turbo-preview" });
```

#### Anthropic

```bash
npm i @langchain/anthropic
```

```bash
ANTHROPIC_API_KEY=your-api-key
```

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llmLong = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

```bash
npm i @langchain/community
```

```bash
FIREWORKS_API_KEY=your-api-key
```

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llmLong = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

```bash
npm i @langchain/mistralai
```

```bash
MISTRAL_API_KEY=your-api-key
```

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llmLong = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```

#### Groq

```bash
npm i @langchain/groq
```

```bash
GROQ_API_KEY=your-api-key
```

```typescript
import { ChatGroq } from "@langchain/groq";

const llmLong = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

#### VertexAI

```bash
npm i @langchain/google-vertexai
```

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llmLong = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```
```typescript
const structuredLlmLong = llmLong.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzerAll = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  structuredLlmLong,
]);
```
```typescript
await queryAnalyzerAll.invoke("what are books about aliens by jess knight");
```

```text
{ query: "aliens", author: "jess knight" }
```
### Find all relevant values
Instead, what we can do is create a [vector store index](/v0.2/docs/concepts#vectorstores) over the relevant values and then query that for the N most relevant values:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const vectorstore = await MemoryVectorStore.fromTexts(names, {}, embeddings);

const selectNames = async (question: string) => {
  const _docs = await vectorstore.similaritySearch(question, 10);
  const _names = _docs.map((d) => d.pageContent);
  return _names.join(", ");
};

const createPrompt = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    authors: selectNames,
  },
  basePrompt,
]);

await createPrompt.invoke("what are books by jess knight");
```
```text
ChatPromptValue {
  messages: [
    SystemMessage {
      content: "Generate a relevant search query for a library system using the 'search' tool.\n" +
        "\n" +
        "The 'author' you ret"... 243 more characters
    },
    HumanMessage {
      content: "what are books by jess knight"
    }
  ]
}
```
```typescript
const queryAnalyzerSelect = createPrompt.pipe(llmWithTools);

await queryAnalyzerSelect.invoke("what are books about aliens by jess knight");
```
{ query: "aliens", author: "Jess Knight" }
Next steps
----------
You’ve now learned how to deal with high cardinality data when constructing queries.
Next, check out some of the other query analysis guides in this section, like [how to use few-shotting to improve performance](/v0.2/docs/how_to/query_few_shot).
https://js.langchain.com/v0.2/docs/how_to/query_multiple_queries | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[🦜🔗](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Introduction](/v0.2/docs/introduction)
* [Tutorials](/v0.2/docs/tutorials/)
* [Build a Question Answering application over a Graph Database](/v0.2/docs/tutorials/graph)
* [Tutorials](/v0.2/docs/tutorials/)
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Query Analysis System](/v0.2/docs/tutorials/query_analysis)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
* [Build an Extraction Chain](/v0.2/docs/tutorials/extraction)
* [Summarize Text](/v0.2/docs/tutorials/summarization)
* [Tagging](/v0.2/docs/tutorials/classification)
* [Build a Local RAG Application](/v0.2/docs/tutorials/local_rag)
* [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history)
* [Build a Retrieval Augmented Generation (RAG) App](/v0.2/docs/tutorials/rag)
* [Build a Question/Answering system over SQL data](/v0.2/docs/tutorials/sql_qa)
* [How-to guides](/v0.2/docs/how_to/)
* [How-to guides](/v0.2/docs/how_to/)
* [How to use example selectors](/v0.2/docs/how_to/example_selectors)
* [Installation](/v0.2/docs/how_to/installation)
* [How to stream responses from an LLM](/v0.2/docs/how_to/streaming_llm)
* [How to stream chat model responses](/v0.2/docs/how_to/chat_streaming)
* [How to embed text data](/v0.2/docs/how_to/embed_text)
* [How to use few shot examples in chat models](/v0.2/docs/how_to/few_shot_examples_chat)
* [How to cache model responses](/v0.2/docs/how_to/llm_caching)
* [How to cache chat model responses](/v0.2/docs/how_to/chat_model_caching)
* [How to create a custom LLM class](/v0.2/docs/how_to/custom_llm)
* [How to use few shot examples](/v0.2/docs/how_to/few_shot_examples)
* [How to use output parsers to parse an LLM response into structured format](/v0.2/docs/how_to/output_parser_structured)
* [How to return structured data from a model](/v0.2/docs/how_to/structured_output)
* [How to add ad-hoc tool calling capability to LLMs and Chat Models](/v0.2/docs/how_to/tools_prompting)
* [How to create a custom chat model class](/v0.2/docs/how_to/custom_chat)
* [How to do per-user retrieval](/v0.2/docs/how_to/qa_per_user)
* [How to track token usage](/v0.2/docs/how_to/chat_token_usage_tracking)
* [How to track token usage](/v0.2/docs/how_to/llm_token_usage_tracking)
* [How to pass through arguments from one step to the next](/v0.2/docs/how_to/passthrough)
* [How to compose prompts together](/v0.2/docs/how_to/prompts_composition)
* [How to use legacy LangChain Agents (AgentExecutor)](/v0.2/docs/how_to/agent_executor)
* [How to add values to a chain's state](/v0.2/docs/how_to/assign)
* [How to attach runtime arguments to a Runnable](/v0.2/docs/how_to/binding)
* [How to cache embedding results](/v0.2/docs/how_to/caching_embeddings)
* [How to split by character](/v0.2/docs/how_to/character_text_splitter)
* [How to manage memory](/v0.2/docs/how_to/chatbots_memory)
* [How to do retrieval](/v0.2/docs/how_to/chatbots_retrieval)
* [How to use tools](/v0.2/docs/how_to/chatbots_tools)
* [How to split code](/v0.2/docs/how_to/code_splitter)
* [How to do retrieval with contextual compression](/v0.2/docs/how_to/contextual_compression)
* [How to write a custom retriever class](/v0.2/docs/how_to/custom_retriever)
* [How to create custom Tools](/v0.2/docs/how_to/custom_tools)
* [How to debug your LLM apps](/v0.2/docs/how_to/debugging)
* [How to load CSV data](/v0.2/docs/how_to/document_loader_csv)
* [How to write a custom document loader](/v0.2/docs/how_to/document_loader_custom)
* [How to load data from a directory](/v0.2/docs/how_to/document_loader_directory)
* [How to load PDF files](/v0.2/docs/how_to/document_loader_pdf)
* [How to load JSON data](/v0.2/docs/how_to/document_loaders_json)
* [How to select examples by length](/v0.2/docs/how_to/example_selectors_length_based)
* [How to select examples by similarity](/v0.2/docs/how_to/example_selectors_similarity)
* [How to use reference examples](/v0.2/docs/how_to/extraction_examples)
* [How to handle long text](/v0.2/docs/how_to/extraction_long_text)
* [How to do extraction without using function calling](/v0.2/docs/how_to/extraction_parse)
* [Fallbacks](/v0.2/docs/how_to/fallbacks)
* [Few Shot Prompt Templates](/v0.2/docs/how_to/few_shot)
* [How to run custom functions](/v0.2/docs/how_to/functions)
* [How to construct knowledge graphs](/v0.2/docs/how_to/graph_constructing)
* [How to map values to a database](/v0.2/docs/how_to/graph_mapping)
* [How to improve results with prompting](/v0.2/docs/how_to/graph_prompting)
* [How to add a semantic layer over the database](/v0.2/docs/how_to/graph_semantic)
* [How to reindex data to keep your vectorstore in-sync with the underlying data source](/v0.2/docs/how_to/indexing)
* [How to get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [🦜🛠️ LangSmith](/v0.2/docs/langsmith/)
* [🦜🕸️LangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* [How-to guides](/v0.2/docs/how_to/)
* How to handle multiple queries
On this page
How to handle multiple queries
==============================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all of the queries and then combine the results. We will show a simple example (using mock data) of how to do that.
Setup
-----

### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm i @langchain/community @langchain/openai zod chromadb
yarn add @langchain/community @langchain/openai zod chromadb
pnpm add @langchain/community @langchain/openai zod chromadb
### Set environment variables
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
### Create Index
We will create a vectorstore over fake information.
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho", "Ankush worked at Facebook"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "multi_query",
});
const retriever = vectorstore.asRetriever(1);
Query analysis
--------------
We will use function calling to structure the output. We will let it return multiple queries.
import { z } from "zod";const searchSchema = z .object({ queries: z.array(z.string()).describe("Distinct queries to search for"), }) .describe("Search over a database of job records.");
### Pick your chat model:

tip

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

#### OpenAI

Install dependencies:

npm i @langchain/openai

yarn add @langchain/openai

pnpm add @langchain/openai

Add environment variables:

OPENAI_API_KEY=your-api-key

Instantiate the model:

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
});

#### Anthropic

Install dependencies:

npm i @langchain/anthropic

yarn add @langchain/anthropic

pnpm add @langchain/anthropic

Add environment variables:

ANTHROPIC_API_KEY=your-api-key

Instantiate the model:

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});

#### FireworksAI

Install dependencies:

npm i @langchain/community

yarn add @langchain/community

pnpm add @langchain/community

Add environment variables:

FIREWORKS_API_KEY=your-api-key

Instantiate the model:

import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});

#### MistralAI

Install dependencies:

npm i @langchain/mistralai

yarn add @langchain/mistralai

pnpm add @langchain/mistralai

Add environment variables:

MISTRAL_API_KEY=your-api-key

Instantiate the model:

import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});

#### Groq

Install dependencies:

npm i @langchain/groq

yarn add @langchain/groq

pnpm add @langchain/groq

Add environment variables:

GROQ_API_KEY=your-api-key

Instantiate the model:

import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
});

#### VertexAI

Install dependencies:

npm i @langchain/google-vertexai

yarn add @langchain/google-vertexai

pnpm add @langchain/google-vertexai

Add environment variables:

GOOGLE_APPLICATION_CREDENTIALS=credentials.json

Instantiate the model:

import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.

If you need to look up two distinct pieces of information, you are allowed to do that!`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
We can see that this allows for creating multiple queries:
await queryAnalyzer.invoke("where did Harrison Work");
{ queries: [ "Harrison" ] }
await queryAnalyzer.invoke("where did Harrison and ankush Work");
{ queries: [ "Harrison work", "Ankush work" ] }
Retrieval with query analysis
-----------------------------
So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously - this will let us loop over the queries without getting blocked on each response.
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const docs = [];
  for (const query of response.queries) {
    const newDocs = await retriever.invoke(query, config);
    docs.push(...newDocs);
  }
  // You probably want to think about reranking or deduplicating documents here
  // But that is a separate topic
  return docs;
};

const customChain = new RunnableLambda({ func: chain });
await customChain.invoke("where did Harrison Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
await customChain.invoke("where did Harrison and ankush Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} }, Document { pageContent: "Ankush worked at Facebook", metadata: {} }]
Next steps
----------
You’ve now learned some techniques for handling multiple queries in a query analysis system.
Next, check out some of the other query analysis guides in this section, like [how to deal with cases where no query is generated](/v0.2/docs/how_to/query_no_queries).
* * *

https://js.langchain.com/v0.2/docs/how_to/query_multiple_retrievers
How to handle multiple retrievers
=================================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select which retriever to invoke. We will show a simple example (using mock data) of how to do that.
Setup
-----

### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm i @langchain/community @langchain/openai zod chromadb
yarn add @langchain/community @langchain/openai zod chromadb
pnpm add @langchain/community @langchain/openai zod chromadb
### Set environment variables
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
### Create Index
We will create a vectorstore over fake information.
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });

// Note: the declarations are renamed per index so the consts don't collide.
const harrisonTexts = ["Harrison worked at Kensho"];
const harrisonVectorstore = await Chroma.fromTexts(harrisonTexts, {}, embeddings, {
  collectionName: "harrison",
});
const retrieverHarrison = harrisonVectorstore.asRetriever(1);

const ankushTexts = ["Ankush worked at Facebook"];
const ankushVectorstore = await Chroma.fromTexts(ankushTexts, {}, embeddings, {
  collectionName: "ankush",
});
const retrieverAnkush = ankushVectorstore.asRetriever(1);
Query analysis
--------------
We will use function calling to structure the output. We will let it return a query along with the person to look things up for, which we can use to route to a retriever.
import { z } from "zod";const searchSchema = z.object({ query: z.string().describe("Query to look up"), person: z .string() .describe( "Person to look things up for. Should be `HARRISON` or `ANKUSH`." ),});
### Pick your chat model:

Install and instantiate your chat model (OpenAI, Anthropic, FireworksAI, MistralAI, Groq, or VertexAI) exactly as shown in the "How to handle multiple queries" guide above, binding it to a variable named `llm`.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
We can see that this allows for routing between retrievers:
await queryAnalyzer.invoke("where did Harrison Work");
{ query: "workplace of Harrison", person: "HARRISON" }
await queryAnalyzer.invoke("where did ankush Work");
{ query: "Workplace of Ankush", person: "ANKUSH" }
Retrieval with query analysis
-----------------------------
So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query:
const retrievers = {
  HARRISON: retrieverHarrison,
  ANKUSH: retrieverAnkush,
};
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const retriever = retrievers[response.person];
  return retriever.invoke(response.query, config);
};

const customChain = new RunnableLambda({ func: chain });
await customChain.invoke("where did Harrison Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
await customChain.invoke("where did ankush Work");
[ Document { pageContent: "Ankush worked at Facebook", metadata: {} } ]
Next steps
----------
You’ve now learned some techniques for handling multiple retrievers in a query analysis system.
Next, check out some of the other query analysis guides in this section, like [how to deal with cases where no query is generated](/v0.2/docs/how_to/query_no_queries).
* * *

https://js.langchain.com/v0.2/docs/how_to/query_no_queries
How to handle cases where no queries are generated
==================================================
Prerequisites
This guide assumes familiarity with the following:
* [Query analysis](/v0.2/docs/tutorials/query_analysis)
Sometimes, a query analysis technique may allow for any number of queries to be generated - including no queries! In this case, our overall chain will need to inspect the result of the query analysis before deciding whether to call the retriever or not.
We will use mock data for this example.
Setup
-----

### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm i @langchain/community @langchain/openai zod chromadb
yarn add @langchain/community @langchain/openai zod chromadb
pnpm add @langchain/community @langchain/openai zod chromadb
### Set environment variables
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
### Create Index
We will create a vectorstore over fake information.
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "harrison",
});
const retriever = vectorstore.asRetriever(1);
Query analysis
--------------
We will use function calling to structure the output. However, we will configure the LLM so that it doesn't NEED to call the function representing a search query (should it decide not to). We will then also use a prompt to do query analysis that explicitly lays out when it should and shouldn't make a search.
import { z } from "zod";const searchSchema = z.object({ query: z.string().describe("Similarity search query applied to job record."),});
### Pick your chat model:

Install and instantiate your chat model (OpenAI, Anthropic, FireworksAI, MistralAI, Groq, or VertexAI) exactly as shown in the "How to handle multiple queries" guide above, binding it to a variable named `llm`.
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.

You do not NEED to look things up. If you don't need to, then just respond normally.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "search",
        description: "Search over a database of job records.",
        parameters: zodToJsonSchema(searchSchema),
      },
    },
  ],
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
We can see that by invoking this we get a message that sometimes - but not always - includes a tool call.
await queryAnalyzer.invoke("where did Harrison work");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd",
        type: "function",
        function: { name: "search", arguments: '{"query":"Harrison"}' }
      }
    ]
  }
}
await queryAnalyzer.invoke("hi!");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Hello! How can I assist you today?",
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Hello! How can I assist you today?",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
Retrieval with query analysis
-----------------------------
So how would we include this in a chain? Let’s look at an example below.
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";

const outputParser = new JsonOutputKeyToolsParser({
  keyName: "search",
});
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  if (
    "tool_calls" in response.additional_kwargs &&
    response.additional_kwargs.tool_calls !== undefined
  ) {
    const query = await outputParser.invoke(response, config);
    return retriever.invoke(query[0].query, config);
  } else {
    return response;
  }
};

const customChain = new RunnableLambda({ func: chain });
await customChain.invoke("where did Harrison Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
await customChain.invoke("hi!");
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Hello! How can I assist you today?",
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Hello! How can I assist you today?",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
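Because `customChain` can return either retrieved `Document`s or a plain `AIMessage`, downstream code has to branch on which one it got. A minimal sketch of one way to do that (the `formatChainResult` helper and its formatting are assumptions, not part of the original guide):

import { AIMessage } from "@langchain/core/messages";
import { Document } from "@langchain/core/documents";

// Hypothetical helper that renders either branch of the chain's output.
const formatChainResult = (result: Document[] | AIMessage): string => {
  if (Array.isArray(result)) {
    // Retrieval branch: join the retrieved page contents.
    return result.map((doc) => doc.pageContent).join("\n");
  }
  // Direct-response branch: the model answered without searching.
  return result.content as string;
};

console.log(formatChainResult(await customChain.invoke("where did Harrison Work")));
console.log(formatChainResult(await customChain.invoke("hi!")));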
Next steps
----------
You’ve now learned some techniques for handling irrelevant questions in query analysis systems.
Next, check out some of the other query analysis guides in this section, like [how to use few-shot examples](/v0.2/docs/how_to/query_few_shot).
https://js.langchain.com/v0.2/docs/how_to/recursive_text_splitter
How to recursively split text by characters
===========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Text splitters](/v0.2/docs/concepts#text-splitters)
This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `["\n\n", "\n", " ", ""]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.
1. How the text is split: by list of characters.
2. How the chunk size is measured: by number of characters.
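Conceptually, the recursion works roughly like the following sketch. This is a simplified model, not the library's implementation: it omits chunk merging and `chunkOverlap` handling, and only shows the "try separators in order, recurse on oversized pieces" idea.

```typescript
// Simplified sketch of the recursive splitting strategy (not the library code).
function recursiveSplit(
  text: string,
  separators: string[],
  chunkSize: number
): string[] {
  // Base case: the text already fits, or we have run out of separators.
  if (text.length <= chunkSize || separators.length === 0) {
    return [text];
  }
  const [separator, ...remaining] = separators;
  return text
    .split(separator)
    .filter((piece) => piece.length > 0)
    .flatMap((piece) =>
      // Keep pieces that fit; recurse on pieces that are still too large,
      // trying the next (finer-grained) separator.
      piece.length <= chunkSize
        ? [piece]
        : recursiveSplit(piece, remaining, chunkSize)
    );
}

// recursiveSplit("one two three", ["\n\n", "\n", " ", ""], 5)
// => [ "one", "two", "three" ]
```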
Below we show example usage.
To obtain the string content directly, use `.splitText`.
To create LangChain [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) objects (e.g., for use in downstream tasks), use `.createDocuments`.
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.
This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10,
  chunkOverlap: 1,
});

const output = await splitter.createDocuments([text]);

console.log(output.slice(0, 3));
```
```
[
  Document {
    pageContent: "Hi.",
    metadata: { loc: { lines: { from: 1, to: 1 } } }
  },
  Document {
    pageContent: "I'm",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  },
  Document {
    pageContent: "Harrison.",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  }
]
```
You’ll note that in the above example we are splitting a raw text string and getting back a list of documents. We can also split documents directly.
```typescript
import { Document } from "@langchain/core/documents";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.
This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10,
  chunkOverlap: 1,
});

const docOutput = await splitter.splitDocuments([
  new Document({ pageContent: text }),
]);

console.log(docOutput.slice(0, 3));
```
```
[
  Document {
    pageContent: "Hi.",
    metadata: { loc: { lines: { from: 1, to: 1 } } }
  },
  Document {
    pageContent: "I'm",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  },
  Document {
    pageContent: "Harrison.",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  }
]
```
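As mentioned above, if you only need the raw string chunks rather than `Document` objects, you can call `.splitText` on the same splitter. A minimal sketch:

```typescript
// Reuses the `splitter` and `text` defined above; returns plain strings.
const chunks = await splitter.splitText(text);
console.log(chunks.slice(0, 3));
// e.g. [ "Hi.", "I'm", "Harrison." ]
```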
You can customize the `RecursiveCharacterTextSplitter` with arbitrary separators by passing a `separators` parameter like this:
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { Document } from "@langchain/core/documents";

const text = `Some other considerations include:

- Do you deploy your backend and frontend together, or separately?
- Do you deploy your backend co-located with your database, or separately?

**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.

## Deployment Options

See below for a list of deployment options for your LangChain app. If you don't see your preferred option, please get in touch and we can add it to this list.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 50,
  chunkOverlap: 1,
  separators: ["|", "##", ">", "-"],
});

const docOutput = await splitter.splitDocuments([
  new Document({ pageContent: text }),
]);

console.log(docOutput.slice(0, 3));
```
```
[
  Document {
    pageContent: "Some other considerations include:",
    metadata: { loc: { lines: { from: 1, to: 1 } } }
  },
  Document {
    pageContent: "- Do you deploy your backend and frontend together",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  },
  Document {
    pageContent: "r, or separately?",
    metadata: { loc: { lines: { from: 3, to: 3 } } }
  }
]
```
Next steps
----------
You’ve now learned a method for splitting text by character.
Next, check out [specific techniques for splitting on code](/v0.2/docs/how_to/code_splitter) or the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
https://js.langchain.com/v0.2/docs/how_to/reduce_retrieval_latency
How to reduce retrieval latency
===============================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
One way to reduce retrieval latency is through a technique called "Adaptive Retrieval". The [`MatryoshkaRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_matryoshka_retriever.MatryoshkaRetriever.html) uses the Matryoshka Representation Learning (MRL) technique to retrieve documents for a given query in two steps:
* **First-pass**: Uses a lower dimensional sub-vector from the MRL embedding for an initial, fast, but less accurate search.
* **Second-pass**: Re-ranks the top results from the first pass using the full, high-dimensional embedding for higher accuracy.
![Matryoshka Retriever](/v0.2/assets/images/adaptive_retrieval-2abb9f6f280c11a424ae6978d39eb011.png)
It is based on this [Supabase](https://supabase.com/) blog post ["Matryoshka embeddings: faster OpenAI vector search using Adaptive Retrieval"](https://supabase.com/blog/matryoshka-embeddings).
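To make the two passes concrete, here is a simplified sketch of the idea over plain in-memory vectors (this is not the `MatryoshkaRetriever` internals). MRL embeddings are trained so that a prefix of the full vector is itself a usable lower-dimensional embedding, which is what makes the cheap first pass possible:

```typescript
// Conceptual sketch of Adaptive Retrieval; names and shapes are illustrative.
type Doc = { id: string; embedding: number[] };

const cosine = (a: number[], b: number[]): number => {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

const adaptiveRetrieve = (
  query: number[],
  docs: Doc[],
  smallDims: number, // e.g. 512: length of the MRL prefix used for the fast pass
  firstPassK: number, // shortlist size, e.g. 50
  finalK: number // final number of results, e.g. 5
): Doc[] => {
  // First pass: fast, approximate scoring on truncated (prefix) vectors.
  const shortlist = docs
    .map((doc) => ({
      doc,
      score: cosine(query.slice(0, smallDims), doc.embedding.slice(0, smallDims)),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, firstPassK);

  // Second pass: accurate re-ranking of the shortlist with full vectors.
  return shortlist
    .map(({ doc }) => ({ doc, score: cosine(query, doc.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, finalK)
    .map(({ doc }) => doc);
};
```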
### Setup
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai @langchain/community

# Yarn
yarn add @langchain/openai @langchain/community

# pnpm
pnpm add @langchain/openai @langchain/community
```
To follow the example below, you need an OpenAI API key:
```bash
export OPENAI_API_KEY=your-api-key
```
We'll also be using `chroma` for our vector store. Follow the instructions [here](/v0.2/docs/integrations/vectorstores/chroma) to set it up.
```typescript
import { MatryoshkaRetriever } from "langchain/retrievers/matryoshka_retriever";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { faker } from "@faker-js/faker";

const smallEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
  dimensions: 512, // Min number for small
});
const largeEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  dimensions: 3072, // Max number for large
});

const vectorStore = new Chroma(smallEmbeddings, {
  numDimensions: 512,
});

const retriever = new MatryoshkaRetriever({
  vectorStore,
  largeEmbeddingModel: largeEmbeddings,
  largeK: 5,
});

const irrelevantDocs = Array.from({ length: 250 }).map(
  () =>
    new Document({
      pageContent: faker.lorem.word(7), // Similar length to the relevant docs
    })
);
const relevantDocs = [
  new Document({
    pageContent: "LangChain is an open source github repo",
  }),
  new Document({
    pageContent: "There are JS and PY versions of the LangChain github repos",
  }),
  new Document({
    pageContent: "LangGraph is a new open source library by the LangChain team",
  }),
  new Document({
    pageContent: "LangChain announced GA of LangSmith last week!",
  }),
  new Document({
    pageContent: "I heart LangChain",
  }),
];
const allDocs = [...irrelevantDocs, ...relevantDocs];

/**
 * IMPORTANT:
 * The `addDocuments` method on `MatryoshkaRetriever` will
 * generate the small AND large embeddings for all documents.
 */
await retriever.addDocuments(allDocs);

const query = "What is LangChain?";
const results = await retriever.invoke(query);
console.log(results.map(({ pageContent }) => pageContent).join("\n"));

/*
  I heart LangChain
  LangGraph is a new open source library by the LangChain team
  LangChain is an open source github repo
  LangChain announced GA of LangSmith last week!
  There are JS and PY versions of the LangChain github repos
*/
```
#### API Reference:
* [MatryoshkaRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_matryoshka_retriever.MatryoshkaRetriever.html) from `langchain/retrievers/matryoshka_retriever`
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
note
Due to the constraints of some vector stores, the large embedding metadata field is stringified (`JSON.stringify`) before being stored. This means that the metadata field will need to be parsed (`JSON.parse`) when retrieved from the vector store.
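As a hedged illustration of that note: if you fetch a document straight from the underlying vector store (bypassing the retriever), the large embedding arrives as a JSON string in its metadata and must be parsed back into an array. The metadata key name below is an assumption, not a confirmed default; check your `MatryoshkaRetriever` configuration for the actual key.

```typescript
// Hedged sketch: the metadata key is hypothetical; consult your retriever's
// configuration for the real key under which the large embedding is stored.
const LARGE_EMBEDDING_KEY = "lc_large_embedding"; // hypothetical key name

const [rawDoc] = await vectorStore.similaritySearch("What is LangChain?", 1);
const largeEmbedding: number[] = JSON.parse(
  rawDoc.metadata[LARGE_EMBEDDING_KEY]
);
console.log(largeEmbedding.length); // 3072 for the large model configured above
```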
Next steps
----------
You've now learned a technique that can help speed up your retrieval queries.
Next, check out the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
https://js.langchain.com/v0.2/docs/how_to/routing
How to route execution within a chain
=====================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
* [Configuring chain parameters at runtime](/v0.2/docs/how_to/binding)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat Messages](/v0.2/docs/concepts/#message-types)
This guide covers how to do routing in the LangChain Expression Language.
Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.
There are two ways to perform routing:
1. Conditionally return runnables from a [`RunnableLambda`](/v0.2/docs/how_to/functions) (recommended)
2. Using a `RunnableBranch` (legacy)
We'll illustrate both methods using a two-step sequence where the first step classifies an input question as being about LangChain, Anthropic, or Other, then routes to a corresponding prompt chain.
Using a custom function
-----------------------
You can use a custom function to route between different outputs. Here's an example:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/anthropic

# Yarn
yarn add @langchain/anthropic

# pnpm
pnpm add @langchain/anthropic
```
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate = ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);
/*
  Anthropic
*/

const langChainChain = ChatPromptTemplate.fromTemplate(
  `You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = ChatPromptTemplate.fromTemplate(
  `You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = ChatPromptTemplate.fromTemplate(
  `Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

// The routing function inspects the classified topic and returns the
// corresponding runnable.
const route = ({ topic }: { question: string; topic: string }) => {
  if (topic.toLowerCase().includes("anthropic")) {
    return anthropicChain;
  } else if (topic.toLowerCase().includes("langchain")) {
    return langChainChain;
  } else {
    return generalChain;
  }
};

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  route,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});
console.log(result1);
/*
  AIMessage {
    content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
      '\n' +
      "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
      '\n' +
      "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
      '\n' +
      "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
      '\n' +
      'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
      '\n' +
      'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
    additional_kwargs: {}
  }
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});
console.log(result2);
/*
  AIMessage {
    content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
      '\n' +
      'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
      '\n' +
      'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
      '\n' +
      'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
      '\n' +
      "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
      '\n' +
      'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
      '\n' +
      'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
    additional_kwargs: {}
  }
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});
console.log(result3);
/*
  AIMessage { content: ' 4', additional_kwargs: {} }
*/
```
#### API Reference:
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Using a RunnableBranch
----------------------
A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It selects a branch by passing the input it's invoked with to each condition in turn, then runs the runnable paired with the first condition that evaluates to true.
If no provided conditions match, it runs the default runnable.
Here's an example of what it looks like in action:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableBranch, RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate = ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);
/*
  Anthropic
*/

const langChainChain = ChatPromptTemplate.fromTemplate(
  `You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = ChatPromptTemplate.fromTemplate(
  `You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = ChatPromptTemplate.fromTemplate(
  `Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

// (condition, runnable) pairs followed by the default runnable.
const branch = RunnableBranch.from([
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("anthropic"),
    anthropicChain,
  ],
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("langchain"),
    langChainChain,
  ],
  generalChain,
]);

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  branch,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});
console.log(result1);
/*
  AIMessage {
    content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
      '\n' +
      "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
      '\n' +
      "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
      '\n' +
      "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
      '\n' +
      'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
      '\n' +
      'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
    additional_kwargs: {}
  }
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});
console.log(result2);
/*
  AIMessage {
    content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
      '\n' +
      'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
      '\n' +
      'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
      '\n' +
      'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
      '\n' +
      "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
      '\n' +
      'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
      '\n' +
      'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
    additional_kwargs: {}
  }
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});
console.log(result3);
/*
  AIMessage { content: ' 4', additional_kwargs: {} }
*/
```
#### API Reference:
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableBranch](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableBranch.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Next steps
----------
You've now learned how to add routing to your composed LCEL chains.
Next, check out the other [how-to guides on runnables](/v0.2/docs/how_to/#langchain-expression-language-lcel) in this section.
https://js.langchain.com/v0.2/docs/how_to/sequence
How to chain runnables
======================
One point about [LangChain Expression Language](/v0.2/docs/concepts/#langchain-expression-language) is that any two runnables can be “chained” together into sequences. The output of the previous runnable’s `.invoke()` call is passed as input to the next runnable. This can be done using the `.pipe()` method.
The resulting [`RunnableSequence`](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like [LangSmith](/v0.2/docs/how_to/debugging).
Prerequisites
This guide assumes familiarity with the following concepts:
* [LangChain Expression Language (LCEL)](/v0.2/docs/concepts/#langchain-expression-language)
* [Prompt templates](/v0.2/docs/concepts/#prompt-templates)
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [Output parser](/v0.2/docs/concepts/#output-parsers)
The pipe method
---------------
To show off how this works, let’s go through an example. We’ll walk through a common pattern in LangChain: using a [prompt template](/v0.2/docs/concepts#prompt-templates) to format input into a [chat model](/v0.2/docs/concepts/#chat-models), and finally converting the chat message output into a string with an [output parser](/v0.2/docs/concepts#output-parsers).
### Pick your chat model:

See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

#### OpenAI

Install dependencies:

```bash
npm i @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

Add environment variables:

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
```

#### Anthropic

Install dependencies:

```bash
npm i @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```

Add environment variables:

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```

#### FireworksAI

Install dependencies:

```bash
npm i @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

Add environment variables:

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```

#### MistralAI

Install dependencies:

```bash
npm i @langchain/mistralai
# or
yarn add @langchain/mistralai
# or
pnpm add @langchain/mistralai
```

Add environment variables:

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0 });
```

#### Groq

Install dependencies:

```bash
npm i @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```

Add environment variables:

```bash
GROQ_API_KEY=your-api-key
```

Instantiate the model:

```typescript
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({ model: "mixtral-8x7b-32768", temperature: 0 });
```

#### VertexAI

Install dependencies:

```bash
npm i @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```

Add environment variables:

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

Instantiate the model:

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro", temperature: 0 });
```

You will also need the `@langchain/core` package:

```bash
npm i @langchain/core
# or
yarn add @langchain/core
# or
pnpm add @langchain/core
```
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate("tell me a joke about {topic}");

const chain = prompt.pipe(model).pipe(new StringOutputParser());
```
Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable:
```typescript
await chain.invoke({ topic: "bears" });
```

```
"Here's a bear joke for you:\n\nWhy did the bear dissolve in water?\nBecause it was a polar bear!"
```
### Coercion
We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.
For example, let’s say we wanted to compose the joke generating chain with another chain that evaluates whether or not the generated joke was funny.
We would need to be careful with how we format the input into the next chain. In the example below, a `RunnableLambda` takes the output of the joke chain and wraps it in an object with a `joke` key. Object literals placed directly in a chain are also automatically coerced into a [`RunnableParallel`](/v0.2/docs/how_to/parallel), which runs all of its values in parallel and returns an object with the results.
Either way, this happens to be the same format the next prompt template expects. Here it is in action:
```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const analysisPrompt = ChatPromptTemplate.fromTemplate(
  "is this a funny joke? {joke}"
);

const composedChain = new RunnableLambda({
  func: async (input) => {
    const result = await chain.invoke(input);
    return { joke: result };
  },
})
  .pipe(analysisPrompt)
  .pipe(model)
  .pipe(new StringOutputParser());

await composedChain.invoke({ topic: "bears" });
```

```
'Haha, that\'s a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing. I appreciate a good pun or wordplay joke.'
```
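The same parallel-map coercion mentioned above can be written more compactly by placing an object literal directly in a sequence. A minimal sketch, assuming the `chain` and `analysisPrompt` from above; the `{ joke: chain }` object is coerced into a `RunnableParallel` whose output feeds the analysis prompt:

```typescript
import { RunnableSequence } from "@langchain/core/runnables";

// The object literal is coerced into a RunnableParallel: it runs `chain`
// on the input and returns { joke: <joke text> } for the next step.
const composedViaObject = RunnableSequence.from([
  { joke: chain },
  analysisPrompt,
  model,
  new StringOutputParser(),
]);

await composedViaObject.invoke({ topic: "bears" });
```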
Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:
```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const composedChainWithLambda = RunnableSequence.from([
  chain,
  (input) => ({ joke: input }),
  analysisPrompt,
  model,
  new StringOutputParser(),
]);

await composedChainWithLambda.invoke({ topic: "beets" });
```

```
"Haha, that's a cute and punny joke! I like how it plays on the idea of beets blushing or turning red like someone blushing. Food puns can be quite amusing. While not a total knee-slapper, it's a light-hearted, groan-worthy dad joke that would make me chuckle and shake my head. Simple vegetable humor!"
```
> See the LangSmith trace for the run above [here](https://smith.langchain.com/public/ef1bf347-a243-4da6-9be6-54f5d73e6da2/r)
However, keep in mind that using functions like this may interfere with operations like streaming. See [this section](/v0.2/docs/how_to/functions) for more information.
Next steps
----------
You now know some ways to chain two runnables together.
To learn more, see the other how-to guides on runnables in this section.
* * *
https://js.langchain.com/v0.2/docs/how_to/split_by_token
How to split text by tokens
===========================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Text splitters](/v0.2/docs/concepts#text-splitters)
Language models have a token limit that you should not exceed, so when splitting text into chunks it is a good idea to count tokens rather than characters. There are many tokenizers; when counting tokens in your text, use the same tokenizer that the language model uses.
`js-tiktoken`
-------------
note
[js-tiktoken](https://github.com/openai/js-tiktoken) is a JavaScript version of the `BPE` tokenizer created by OpenAI.
We can use `js-tiktoken` to estimate tokens used. It is tuned to OpenAI models.
1. How the text is split: by character passed in.
2. How the chunk size is measured: by the `js-tiktoken` tokenizer.
You can use the [`TokenTextSplitter`](https://v02.api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) like this:
```typescript
import { TokenTextSplitter } from "@langchain/textsplitters";
import * as fs from "node:fs";

// Load an example document (readFileSync is synchronous, so no await needed)
const rawData = fs.readFileSync("../../../../examples/state_of_the_union.txt");
const stateOfTheUnion = rawData.toString();

const textSplitter = new TokenTextSplitter({
  chunkSize: 10,
  chunkOverlap: 0,
});

const texts = await textSplitter.splitText(stateOfTheUnion);

console.log(texts[0]);
```

```
Madam Speaker, Madam Vice President, our
```
**Note:** Some written languages (e.g. Chinese and Japanese) have characters which encode to 2 or more tokens. Using the `TokenTextSplitter` directly can split the tokens for a character between two chunks causing malformed Unicode characters.
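If you just want to count tokens without splitting, you can call `js-tiktoken` directly. A minimal sketch, assuming an OpenAI-style model; `encodingForModel` picks the encoding that matches the given model name:

```typescript
import { encodingForModel } from "js-tiktoken";

// Get the tokenizer used by the target model, then count tokens.
const enc = encodingForModel("gpt-3.5-turbo");
const numTokens = enc.encode("Madam Speaker, Madam Vice President, our").length;

console.log(numTokens);
```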
Next steps
----------
You’ve now learned a method for splitting text based on token count.
Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
* * *
https://js.langchain.com/v0.2/docs/how_to/sql_large_db
How to deal with large databases
================================
Prerequisites
This guide assumes familiarity with the following:
* [Question answering over SQL data](/v0.2/docs/tutorials/sql_qa)
In order to write valid queries against a database, we need to feed the model the table names, table schemas, and feature values for it to query over. When there are many tables, columns, and/or high-cardinality columns, it becomes impossible for us to dump the full information about our database in every prompt. Instead, we must find ways to dynamically insert into the prompt only the most relevant information. Let's take a look at some techniques for doing this.
Setup
-----
First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.
```bash
npm install langchain @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The below example will use a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test with `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee',
  'Genre', 'Invoice', 'InvoiceLine', 'MediaType',
  'Playlist', 'PlaylistTrack', 'Track'
]
 */
```
#### API Reference:
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
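As a quick sanity check, you can run a raw query through the same wrapper. A minimal sketch reusing the `db` instance from above; `db.run` returns the result rows serialized as a JSON string:

```typescript
// Run a raw SQL query against the connected Chinook database.
// The result comes back as a JSON string, e.g. '[{"Name":"AC/DC"},...]'.
const sample = await db.run("SELECT Name FROM Artist LIMIT 3;");
console.log(sample);
```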
Many tables
-----------
One of the main pieces of information we need to include in our prompt is the schemas of the relevant tables. When we have very many tables, we can't fit all of the schemas in a single prompt. What we can do in such cases is first extract the names of the tables related to the user input, and then include only their schemas.
One easy and reliable way to do this is using tool-calling with a Zod schema. LangChain chat models expose a built-in [`withStructuredOutput`](/v0.2/docs/how_to/structured_output) method that lets us do just this:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { z } from "zod";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });

const Table = z.object({
  names: z.array(z.string()).describe("Names of tables in SQL database"),
});

const tableNames = db.allTables.map((t) => t.tableName).join("\n");

const system = `Return the names of ALL the SQL tables that MIGHT be relevant to the user question.
The tables are:

${tableNames}

Remember to include ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{input}"],
]);

const tableChain = prompt.pipe(llm.withStructuredOutput(Table));

console.log(
  await tableChain.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
{ names: [ 'Artist', 'Track', 'Genre' ] }
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/5ca0c91e-4a40-44ef-8c45-9a4247dc474c/r

/**
This works pretty well! Except, as we'll see below, we actually need a few other tables as well.
This would be pretty difficult for the model to know based just on the user question.
In this case, we might think to simplify our model's job by grouping the tables together.
We'll just ask the model to choose between categories "Music" and "Business",
and then take care of selecting all the relevant tables from there:
 */

const prompt2 = ChatPromptTemplate.fromMessages([
  [
    "system",
    `Return the names of the SQL tables that are relevant to the user question.
The tables are:

Music
Business`,
  ],
  ["human", "{input}"],
]);

const categoryChain = prompt2.pipe(llm.withStructuredOutput(Table));

console.log(
  await categoryChain.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
{ names: [ 'Music' ] }
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/12b62e78-bfbe-42ff-86f2-ad738a476554/r

const getTables = (categories: z.infer<typeof Table>): Array<string> => {
  let tables: Array<string> = [];
  for (const category of categories.names) {
    if (category === "Music") {
      tables = tables.concat([
        "Album",
        "Artist",
        "Genre",
        "MediaType",
        "Playlist",
        "PlaylistTrack",
        "Track",
      ]);
    } else if (category === "Business") {
      tables = tables.concat([
        "Customer",
        "Employee",
        "Invoice",
        "InvoiceLine",
      ]);
    }
  }
  return tables;
};

const tableChain2 = categoryChain.pipe(getTables);

console.log(
  await tableChain2.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
[
  'Album', 'Artist', 'Genre', 'MediaType',
  'Playlist', 'PlaylistTrack', 'Track'
]
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/e78c10aa-e923-4a24-b0c8-f7a6f5d316ce/r

// Now that we've got a chain that can output the relevant tables for any query,
// we can combine this with our createSqlQueryChain, which can accept a list of
// tableNamesToUse to determine which table schemas are included in the prompt:

const queryChain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

const tableChain3 = RunnableSequence.from([
  {
    input: (i: { question: string }) => i.question,
  },
  tableChain2,
]);

const fullChain = RunnablePassthrough.assign({
  tableNamesToUse: tableChain3,
}).pipe(queryChain);

const query = await fullChain.invoke({
  question: "What are all the genres of Alanis Morisette songs?",
});

console.log(query);
/**
SELECT DISTINCT "Genre"."Name"
FROM "Genre"
JOIN "Track" ON "Genre"."GenreId" = "Track"."GenreId"
JOIN "Album" ON "Track"."AlbumId" = "Album"."AlbumId"
JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId"
WHERE "Artist"."Name" = 'Alanis Morissette'
LIMIT 5;
 */

console.log(await db.run(query));
/**
[{"Name":"Rock"}]
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/c7d576d0-3462-40db-9edc-5492f10555bf/r

// We might rephrase our question slightly to remove redundancy in the answer:

const query2 = await fullChain.invoke({
  question: "What is the set of all unique genres of Alanis Morisette songs?",
});

console.log(query2);
/**
SELECT DISTINCT Genre.Name FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Alanis Morissette'
 */

console.log(await db.run(query2));
/**
[{"Name":"Rock"}]
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/6e80087d-e930-4f22-9b40-f7edb95a2145/r
```
#### API Reference:
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
We've seen how to dynamically include a subset of table schemas in a prompt within a chain. Another possible approach to this problem is to let an Agent decide for itself when to look up tables by giving it a Tool to do so.
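For example, you could expose schema lookup as a tool. Below is a minimal sketch, assuming the `db` instance from above; the tool name and description are illustrative, and `SqlDatabase.getTableInfo` is used here to fetch the schema for the requested table:

```typescript
import { DynamicTool } from "@langchain/core/tools";

// A hypothetical tool an agent could call to look up table schemas on demand,
// instead of receiving every schema up front in the prompt.
const getTableSchema = new DynamicTool({
  name: "get-table-schema",
  description:
    "Returns the schema for a given SQL table. Input should be a table name.",
  func: async (tableName: string) => db.getTableInfo([tableName]),
});

console.log(await getTableSchema.invoke("Artist"));
```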
High-cardinality columns
------------------------
High-cardinality refers to columns in a database that have a vast range of unique values. These columns are characterized by a high level of uniqueness in their data entries, such as individual names, addresses, or product serial numbers. High-cardinality data can pose challenges for indexing and querying, as it requires more sophisticated strategies to efficiently filter and retrieve specific entries.
In order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly.
One naive strategy is to create a vector store containing all the distinct proper nouns that exist in the database. We can then query that vector store with each user input and inject the most relevant proper nouns into the prompt.
First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:
```typescript
import { DocumentInterface } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

async function queryAsList(database: any, query: string): Promise<string[]> {
  const res: Array<{ [key: string]: string }> = JSON.parse(
    await database.run(query)
  )
    .flat()
    .filter((el: any) => el != null);
  const justValues: Array<string> = res.map((item) =>
    Object.values(item)[0]
      .replace(/\b\d+\b/g, "")
      .trim()
  );
  return justValues;
}

let properNouns: string[] = await queryAsList(db, "SELECT Name FROM Artist");
properNouns = properNouns.concat(
  await queryAsList(db, "SELECT Title FROM Album")
);
properNouns = properNouns.concat(
  await queryAsList(db, "SELECT Name FROM Genre")
);

console.log(properNouns.length);
/**
647
 */

console.log(properNouns.slice(0, 5));
/**
[
  'AC/DC',
  'Accept',
  'Aerosmith',
  'Alanis Morissette',
  'Alice In Chains'
]
 */

// Now we can embed and store all of our values in a vector database:
const vectorDb = await MemoryVectorStore.fromTexts(
  properNouns,
  {},
  new OpenAIEmbeddings()
);
const retriever = vectorDb.asRetriever(15);

// And put together a query construction chain that first retrieves values
// from the database and inserts them into the prompt:
const system = `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.

Here is the relevant table info: {table_info}

Here is a non-exhaustive list of possible feature values.
If filtering on a feature value make sure to check its spelling against this list first:

{proper_nouns}`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{input}"],
]);

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });

const queryChain = await createSqlQueryChain({
  llm,
  db,
  prompt,
  dialect: "sqlite",
});

const retrieverChain = RunnableSequence.from([
  (i: { question: string }) => i.question,
  retriever,
  (docs: Array<DocumentInterface>) =>
    docs.map((doc) => doc.pageContent).join("\n"),
]);

const chain = RunnablePassthrough.assign({
  proper_nouns: retrieverChain,
}).pipe(queryChain);

// To try out our chain, let's see what happens when we try filtering on
// "elenis moriset", a misspelling of Alanis Morissette, without and with retrieval:

// Without retrieval
const query = await queryChain.invoke({
  question: "What are all the genres of Elenis Moriset songs?",
  proper_nouns: "",
});

console.log("query", query);
/**
query SELECT DISTINCT Genre.Name
FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Elenis Moriset'
LIMIT 5;
 */

console.log("db query results", await db.run(query));
/**
db query results []
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/b153cb9b-6fbb-43a8-b2ba-4c86715183b9/r

// With retrieval:
const query2 = await chain.invoke({
  question: "What are all the genres of Elenis Moriset songs?",
});

console.log("query2", query2);
/**
query2 SELECT DISTINCT Genre.Name
FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Alanis Morissette';
 */

console.log("db query results", await db.run(query2));
/**
db query results [{"Name":"Rock"}]
 */

// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/2f4f0e37-3b7f-47b5-837c-e2952489cac0/r
```
#### API Reference:
* [DocumentInterface](https://v02.api.js.langchain.com/interfaces/langchain_core_documents.DocumentInterface.html) from `@langchain/core/documents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
We can see that with retrieval we're able to correct the spelling and get back a valid result.
Another possible approach to this problem is to let an Agent decide for itself when to look up proper nouns.
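A minimal sketch of that agent-facing approach, assuming the `retriever` over proper nouns built above. `createRetrieverTool` wraps a retriever as a tool the agent can call when it suspects a name may be misspelled; the tool name, description, and exact invocation shape here are illustrative:

```typescript
import { createRetrieverTool } from "langchain/tools/retriever";

// Wrap the proper-noun retriever as a tool; an agent can call it to
// check the spelling of artist, album, or genre names before querying.
const searchProperNouns = createRetrieverTool(retriever, {
  name: "search-proper-nouns",
  description:
    "Looks up valid proper nouns (artists, albums, genres) similar to the input.",
});

console.log(await searchProperNouns.invoke({ query: "elenis moriset" }));
```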
Next steps
----------
You've now learned some strategies for generating SQL over a large database.
Next, check out some of the other guides in this section, like [how to validate queries](/v0.2/docs/how_to/sql_query_checking). You might also be interested in the query analysis guide [on handling high cardinality](/v0.2/docs/how_to/query_high_cardinality).
* * *
#### Was this page helpful?
#### You can leave detailed feedback [on GitHub](https://github.com/langchain-ai/langchainjs/issues/new?assignees=&labels=03+-+Documentation&projects=&template=documentation.yml&title=DOC%3A+%3CPlease+write+a+comprehensive+title+after+the+%27DOC%3A+%27+prefix%3E).
[
Previous
How to split text by tokens
](/v0.2/docs/how_to/split_by_token)[
Next
How to use prompting to improve results
](/v0.2/docs/how_to/sql_prompting)
* [Setup](#setup)
* [Many tables](#many-tables)
* [High-cardinality columns](#high-cardinality-columns)
* [Next steps](#next-steps)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright © 2024 LangChain, Inc. |
https://js.langchain.com/v0.2/docs/how_to/sql_prompting | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
How to use prompting to improve results
=======================================
Prerequisites
This guide assumes familiarity with the following:
* [Question answering over SQL data](/v0.2/docs/tutorials/sql_qa)
In this guide we'll go over prompting strategies to improve SQL query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.
Setup
-----
First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.
```bash
npm install @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The below example will use a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:

* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`

Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
 */
```
#### API Reference:
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Dialect-specific prompting
--------------------------
One of the simplest things we can do is make our prompt specific to the SQL dialect we're using. When using the built-in [`createSqlQueryChain`](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) and [`SqlDatabase`](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html), this is handled for you for any of the following dialects:
```typescript
import { SQL_PROMPTS_MAP } from "langchain/chains/sql_db";

console.log({ SQL_PROMPTS_MAP: Object.keys(SQL_PROMPTS_MAP) });
/**
{
  SQL_PROMPTS_MAP: [ 'oracle', 'postgres', 'sqlite', 'mysql', 'mssql', 'sap hana' ]
}
 */

// For example, using our current DB we can see that we'll get a SQLite-specific prompt:
console.log({
  sqlite: SQL_PROMPTS_MAP.sqlite,
});
/**
{
  sqlite: PromptTemplate {
    inputVariables: [ 'dialect', 'table_info', 'input', 'top_k' ],
    template: 'You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\n' +
      'Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\n' +
      'Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.\n' +
      'Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n' +
      '\n' +
      'Use the following format:\n' +
      '\n' +
      'Question: "Question here"\n' +
      'SQLQuery: "SQL Query to run"\n' +
      'SQLResult: "Result of the SQLQuery"\n' +
      'Answer: "Final answer here"\n' +
      '\n' +
      'Only use the following tables:\n' +
      '{table_info}\n' +
      '\n' +
      'Question: {input}',
  }
}
 */
```
#### API Reference:
* [SQL\_PROMPTS\_MAP](https://v02.api.js.langchain.com/variables/langchain_chains_sql_db.SQL_PROMPTS_MAP.html) from `langchain/chains/sql_db`
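Though `createSqlQueryChain` picks the right prompt for you automatically, you can also pass one of these prompts explicitly, for example as a starting point for your own customizations. A minimal sketch (our own, assuming the `db` helper used throughout this guide):

```typescript
import { SQL_PROMPTS_MAP, createSqlQueryChain } from "langchain/chains/sql_db";
import { ChatOpenAI } from "@langchain/openai";
import { db } from "../db.js"; // assumed: the database helper from this guide's examples

const llm = new ChatOpenAI({ temperature: 0 });

// Passing the prompt explicitly should be equivalent to the default behavior
// here, but gives you a template to copy and tweak.
const chain = await createSqlQueryChain({
  llm,
  db,
  prompt: SQL_PROMPTS_MAP.sqlite,
  dialect: "sqlite",
});
console.log(await chain.invoke({ question: "How many artists are there?" }));
```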
Table definitions and example rows
----------------------------------
In basically any SQL chain, we'll need to feed the model at least part of the database schema. Without this it won't be able to write valid queries. Our database comes with some convenience methods to give us the relevant context. Specifically, we can get the table names, their schemas, and a sample of rows from each table:
```typescript
import { db } from "../db.js";

const context = await db.getTableInfo();
console.log(context);
/**
CREATE TABLE Album (
  AlbumId INTEGER NOT NULL,
  Title NVARCHAR(160) NOT NULL,
  ArtistId INTEGER NOT NULL
)
SELECT * FROM "Album" LIMIT 3;
AlbumId Title ArtistId
1 For Those About To Rock We Salute You 1
2 Balls to the Wall 2
3 Restless and Wild 2

CREATE TABLE Artist (
  ArtistId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Artist" LIMIT 3;
ArtistId Name
1 AC/DC
2 Accept
3 Aerosmith

CREATE TABLE Customer (
  CustomerId INTEGER NOT NULL,
  FirstName NVARCHAR(40) NOT NULL,
  LastName NVARCHAR(20) NOT NULL,
  Company NVARCHAR(80),
  Address NVARCHAR(70),
  City NVARCHAR(40),
  State NVARCHAR(40),
  Country NVARCHAR(40),
  PostalCode NVARCHAR(10),
  Phone NVARCHAR(24),
  Fax NVARCHAR(24),
  Email NVARCHAR(60) NOT NULL,
  SupportRepId INTEGER
)
SELECT * FROM "Customer" LIMIT 3;
CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId
1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima,2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3
2 Leonie Köhler null Theodor-Heuss-Straße 34 Stuttgart null Germany 70174 +49 0711 2842222 null leonekohler@surfeu.de 5
3 François Tremblay null 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 null ftremblay@gmail.com 3

CREATE TABLE Employee (
  EmployeeId INTEGER NOT NULL,
  LastName NVARCHAR(20) NOT NULL,
  FirstName NVARCHAR(20) NOT NULL,
  Title NVARCHAR(30),
  ReportsTo INTEGER,
  BirthDate DATETIME,
  HireDate DATETIME,
  Address NVARCHAR(70),
  City NVARCHAR(40),
  State NVARCHAR(40),
  Country NVARCHAR(40),
  PostalCode NVARCHAR(10),
  Phone NVARCHAR(24),
  Fax NVARCHAR(24),
  Email NVARCHAR(60)
)
SELECT * FROM "Employee" LIMIT 3;
EmployeeId LastName FirstName Title ReportsTo BirthDate HireDate Address City State Country PostalCode Phone Fax Email
1 Adams Andrew General Manager null 1962-02-18 00:00:00 2002-08-14 00:00:00 11120 Jasper Ave NW Edmonton AB Canada T5K 2N1 +1 (780) 428-9482 +1 (780) 428-3457 andrew@chinookcorp.com
2 Edwards Nancy Sales Manager 1 1958-12-08 00:00:00 2002-05-01 00:00:00 825 8 Ave SW Calgary AB Canada T2P 2T3 +1 (403) 262-3443 +1 (403) 262-3322 nancy@chinookcorp.com
3 Peacock Jane Sales Support Agent 2 1973-08-29 00:00:00 2002-04-01 00:00:00 1111 6 Ave SW Calgary AB Canada T2P 5M5 +1 (403) 262-3443 +1 (403) 262-6712 jane@chinookcorp.com

CREATE TABLE Genre (
  GenreId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Genre" LIMIT 3;
GenreId Name
1 Rock
2 Jazz
3 Metal

CREATE TABLE Invoice (
  InvoiceId INTEGER NOT NULL,
  CustomerId INTEGER NOT NULL,
  InvoiceDate DATETIME NOT NULL,
  BillingAddress NVARCHAR(70),
  BillingCity NVARCHAR(40),
  BillingState NVARCHAR(40),
  BillingCountry NVARCHAR(40),
  BillingPostalCode NVARCHAR(10),
  Total NUMERIC(10,2) NOT NULL
)
SELECT * FROM "Invoice" LIMIT 3;
InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total
1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart null Germany 70174 1.98
2 4 2009-01-02 00:00:00 Ullevålsveien 14 Oslo null Norway 0171 3.96
3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels null Belgium 1000 5.94

CREATE TABLE InvoiceLine (
  InvoiceLineId INTEGER NOT NULL,
  InvoiceId INTEGER NOT NULL,
  TrackId INTEGER NOT NULL,
  UnitPrice NUMERIC(10,2) NOT NULL,
  Quantity INTEGER NOT NULL
)
SELECT * FROM "InvoiceLine" LIMIT 3;
InvoiceLineId InvoiceId TrackId UnitPrice Quantity
1 1 2 0.99 1
2 1 4 0.99 1
3 2 6 0.99 1

CREATE TABLE MediaType (
  MediaTypeId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "MediaType" LIMIT 3;
MediaTypeId Name
1 MPEG audio file
2 Protected AAC audio file
3 Protected MPEG-4 video file

CREATE TABLE Playlist (
  PlaylistId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Playlist" LIMIT 3;
PlaylistId Name
1 Music
2 Movies
3 TV Shows

CREATE TABLE PlaylistTrack (
  PlaylistId INTEGER NOT NULL,
  TrackId INTEGER NOT NULL
)
SELECT * FROM "PlaylistTrack" LIMIT 3;
PlaylistId TrackId
1 3402
1 3389
1 3390

CREATE TABLE Track (
  TrackId INTEGER NOT NULL,
  Name NVARCHAR(200) NOT NULL,
  AlbumId INTEGER,
  MediaTypeId INTEGER NOT NULL,
  GenreId INTEGER,
  Composer NVARCHAR(220),
  Milliseconds INTEGER NOT NULL,
  Bytes INTEGER,
  UnitPrice NUMERIC(10,2) NOT NULL
)
SELECT * FROM "Track" LIMIT 3;
TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice
1 For Those About To Rock (We Salute You) 1 1 1 Angus Young,Malcolm Young,Brian Johnson 343719 11170334 0.99
2 Balls to the Wall 2 2 1 U. Dirkschneider,W. Hoffmann,H. Frank,P. Baltes,S. Kaufmann,G. Hoffmann 342562 5510424 0.99
3 Fast As a Shark 3 2 1 F. Baltes,S. Kaufman,U. Dirkscneider & W. Hoffman 230619 3990994 0.99
 */
```
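If the full schema is too long for your prompt, you can trim the context down to just the relevant tables. A small sketch of our own, with the caveat that the optional list-of-table-names argument to `getTableInfo` is our reading of the `SqlDatabase` API rather than something this guide demonstrates:

```typescript
import { db } from "../db.js";

// Assumption: getTableInfo accepts an optional array of target table names,
// returning DDL and sample rows for only those tables.
const context = await db.getTableInfo(["Artist", "Album"]);
console.log(context);
```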
Few-shot examples
-----------------
Including examples of natural language questions being converted to valid SQL queries against our database in the prompt will often improve model performance, especially for complex queries.
Let's say we have the following examples:
```typescript
export const examples = [
  { input: "List all artists.", query: "SELECT * FROM Artist;" },
  {
    input: "Find all albums for the artist 'AC/DC'.",
    query:
      "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
  },
  {
    input: "List all tracks in the 'Rock' genre.",
    query:
      "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
  },
  {
    input: "Find the total duration of all tracks.",
    query: "SELECT SUM(Milliseconds) FROM Track;",
  },
  {
    input: "List all customers from Canada.",
    query: "SELECT * FROM Customer WHERE Country = 'Canada';",
  },
  {
    input: "How many tracks are there in the album with ID 5?",
    query: "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
  },
  {
    input: "Find the total number of invoices.",
    query: "SELECT COUNT(*) FROM Invoice;",
  },
  {
    input: "List all tracks that are longer than 5 minutes.",
    query: "SELECT * FROM Track WHERE Milliseconds > 300000;",
  },
  {
    input: "Who are the top 5 customers by total purchase?",
    query:
      "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
  },
  {
    input: "Which albums are from the year 2000?",
    query: "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
  },
  {
    input: "How many employees are there",
    query: 'SELECT COUNT(*) FROM "Employee"',
  },
];
```
We can create a few-shot prompt with them like so:
```typescript
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { examples } from "./examples.js";

const examplePrompt = PromptTemplate.fromTemplate(
  `User input: {input}\nSQL Query: {query}`
);

const prompt = new FewShotPromptTemplate({
  examples: examples.slice(0, 5),
  examplePrompt,
  prefix: `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.
Here is the relevant table info: {table_info}
Below are a number of examples of questions and their corresponding SQL queries.`,
  suffix: "User input: {input}\nSQL query: ",
  inputVariables: ["input", "top_k", "table_info"],
});

console.log(
  await prompt.format({
    input: "How many artists are there?",
    top_k: "3",
    table_info: "foo",
  })
);
/**
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than 3 rows.
Here is the relevant table info: foo
Below are a number of examples of questions and their corresponding SQL queries.

User input: List all artists.
SQL Query: SELECT * FROM Artist;

User input: Find all albums for the artist 'AC/DC'.
SQL Query: SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');

User input: List all tracks in the 'Rock' genre.
SQL Query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');

User input: Find the total duration of all tracks.
SQL Query: SELECT SUM(Milliseconds) FROM Track;

User input: List all customers from Canada.
SQL Query: SELECT * FROM Customer WHERE Country = 'Canada';

User input: How many artists are there?
SQL query:
 */
```
#### API Reference:
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
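Like the dynamic version shown next, this static few-shot prompt can be passed straight into the query chain. A brief sketch, assuming the `prompt` built above and the shared `db` helper are in scope:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { db } from "../db.js"; // assumed helper from this guide's examples

const llm = new ChatOpenAI({ temperature: 0 });
// `prompt` is the FewShotPromptTemplate constructed in the previous snippet.
const chain = await createSqlQueryChain({
  db,
  llm,
  prompt,
  dialect: "sqlite",
});
console.log(await chain.invoke({ question: "How many artists are there?" }));
```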
Dynamic few-shot examples
-------------------------
If we have enough examples, we may want to include only the most relevant ones in the prompt, either because they don't all fit in the model's context window or because the long tail of examples distracts the model. Specifically, given any input, we want to include the examples most relevant to that input.
We can do just this using an ExampleSelector. In this case we'll use a [`SemanticSimilarityExampleSelector`](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones:
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { examples } from "./examples.js";
import { db } from "../db.js";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples<
  typeof MemoryVectorStore
>(examples, new OpenAIEmbeddings(), MemoryVectorStore, {
  k: 5,
  inputKeys: ["input"],
});

console.log(
  await exampleSelector.selectExamples({ input: "how many artists are there?" })
);
/**
[
  { input: 'List all artists.', query: 'SELECT * FROM Artist;' },
  {
    input: 'How many employees are there',
    query: 'SELECT COUNT(*) FROM "Employee"'
  },
  {
    input: 'How many tracks are there in the album with ID 5?',
    query: 'SELECT COUNT(*) FROM Track WHERE AlbumId = 5;'
  },
  {
    input: 'Which albums are from the year 2000?',
    query: "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';"
  },
  {
    input: "List all tracks in the 'Rock' genre.",
    query: "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');"
  }
]
 */

// To use it, we can pass the ExampleSelector directly into our FewShotPromptTemplate:
const examplePrompt = PromptTemplate.fromTemplate(
  `User input: {input}\nSQL Query: {query}`
);

const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  prefix: `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.
Here is the relevant table info: {table_info}
Below are a number of examples of questions and their corresponding SQL queries.`,
  suffix: "User input: {input}\nSQL query: ",
  inputVariables: ["input", "top_k", "table_info"],
});

console.log(
  await prompt.format({
    input: "How many artists are there?",
    top_k: "3",
    table_info: "foo",
  })
);
/**
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than 3 rows.
Here is the relevant table info: foo
Below are a number of examples of questions and their corresponding SQL queries.

User input: List all artists.
SQL Query: SELECT * FROM Artist;

User input: How many employees are there
SQL Query: SELECT COUNT(*) FROM "Employee"

User input: How many tracks are there in the album with ID 5?
SQL Query: SELECT COUNT(*) FROM Track WHERE AlbumId = 5;

User input: Which albums are from the year 2000?
SQL Query: SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';

User input: List all tracks in the 'Rock' genre.
SQL Query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');

User input: How many artists are there?
SQL query:
 */

// Now we can use it in a chain:
const llm = new ChatOpenAI({
  temperature: 0,
});
const chain = await createSqlQueryChain({
  db,
  llm,
  prompt,
  dialect: "sqlite",
});

console.log(await chain.invoke({ question: "how many artists are there?" }));
/**
SELECT COUNT(*) FROM Artist;
 */
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [SemanticSimilarityExampleSelector](https://v02.api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [FewShotPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
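Since the selector is generic over the vector store, you aren't limited to `MemoryVectorStore`. As a hedged sketch of our own (assuming the v0.2 community import path for HNSWLib, which is Node.js only), the same call works with a store that can persist its index to disk between runs:

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { OpenAIEmbeddings } from "@langchain/openai";
import { examples } from "./examples.js";

// Same selector as above, but backed by an HNSW index instead of an
// in-memory store, so the example embeddings can be saved and reloaded.
const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples<
  typeof HNSWLib
>(examples, new OpenAIEmbeddings(), HNSWLib, {
  k: 5,
  inputKeys: ["input"],
});
```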
Next steps
----------
You've now learned about some prompting strategies to improve SQL generation.
Next, check out some of the other guides in this section, like [how to query over large databases](/v0.2/docs/how_to/sql_large_db).
* * *

https://js.langchain.com/v0.2/docs/how_to/sql_query_checking
How to do query validation
==========================
Prerequisites
This guide assumes familiarity with the following:
* [Question answering over SQL data](/v0.2/docs/tutorials/sql_qa)
Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In this guide we'll go over some strategies for validating our queries and handling invalid queries.
Setup
-----
First, get required packages and set environment variables:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The below example will use a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:

* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`

Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
 */
```
#### API Reference:
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Query checker
-------------
Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. Suppose we have the following SQL query chain:
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const chain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

/**
 * And we want to validate its outputs. We can do so by extending the chain
 * with a second prompt and model call:
 */
const SYSTEM_PROMPT = `Double check the user's {dialect} query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

If there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.

Output the final SQL query only.`;

const prompt = await ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT],
  ["human", "{query}"],
]).partial({ dialect: "sqlite" });

const validationChain = prompt.pipe(llm).pipe(new StringOutputParser());

const fullChain = RunnableSequence.from([
  {
    query: async (i: { question: string }) => chain.invoke(i),
  },
  validationChain,
]);

const query = await fullChain.invoke({
  question:
    "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010",
});
console.log("query", query);
/**
query SELECT AVG("Total") FROM "Invoice" WHERE "CustomerId" IN (SELECT "CustomerId" FROM "Customer" WHERE "Country" = 'USA' AND "Fax" IS NULL) AND "InvoiceDate" BETWEEN '2003-01-01 00:00:00' AND '2009-12-31 23:59:59'
 */
console.log("db query results", await db.run(query));
/**
db query results [{"AVG(\"Total\")":6.632999999999998}]
 */

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/d1131395-8477-47cd-8f74-e0c5491ea956/r
// -------------

// The obvious downside of this approach is that we need to make two model calls
// instead of one to generate our query.
// To get around this we can try to perform the query generation and query check
// in a single model invocation:
const SYSTEM_PROMPT_2 = `You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.
Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".

Only use the following tables:
{table_info}

Write an initial draft of the query. Then double check the {dialect} query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

Use format:

First draft: <<FIRST_DRAFT_QUERY>>
Final answer: <<FINAL_ANSWER_QUERY>>`;

const prompt2 = await PromptTemplate.fromTemplate(
  `System: ${SYSTEM_PROMPT_2}

Human: {input}`
).partial({ dialect: "sqlite" });

const parseFinalAnswer = (output: string): string =>
  output.split("Final answer: ")[1];

const chain2 = (
  await createSqlQueryChain({
    llm,
    db,
    prompt: prompt2,
    dialect: "sqlite",
  })
).pipe(parseFinalAnswer);

const query2 = await chain2.invoke({
  question:
    "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010",
});
console.log("query2", query2);
/**
query2 SELECT AVG("Total") FROM "Invoice" WHERE "CustomerId" IN (SELECT "CustomerId" FROM "Customer" WHERE "Country" = 'USA' AND "Fax" IS NULL) AND date("InvoiceDate") BETWEEN date('2003-01-01') AND date('2009-12-31') LIMIT 5
 */
console.log("db query results", await db.run(query2));
/**
db query results [{"AVG(\"Total\")":6.632999999999998}]
 */

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/e21d6146-eca9-4de6-a078-808fd09979ea/r
// -------------
```
#### API Reference:
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://v02.api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://v02.api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
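Both approaches above validate style and syntax with a model, but neither enforces the "safe" half of the problem deterministically. As a simple sketch of our own (not from this guide; keyword matching is crude, and a real guard should parse the SQL), you can refuse to execute anything that isn't a plain `SELECT` before calling `db.run`:

```typescript
// Hypothetical guard: reject anything that isn't a read-only SELECT.
const assertReadOnly = (sql: string): string => {
  const normalized = sql.trim().toUpperCase();
  // Naive keyword screen; word boundaries avoid matching e.g. "CREATED".
  const forbidden =
    /\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|ATTACH|PRAGMA)\b/;
  if (!normalized.startsWith("SELECT") || forbidden.test(normalized)) {
    throw new Error(`Refusing to execute potentially unsafe query: ${sql}`);
  }
  return sql;
};

// Usage with the single-call chain above:
console.log("db query results", await db.run(assertReadOnly(query2)));
```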
Next steps
----------
You've now learned about some strategies to validate generated SQL queries.
Next, check out some of the other guides in this section, like [how to query over large databases](/v0.2/docs/how_to/sql_large_db).
* * *

https://js.langchain.com/v0.2/docs/integrations/vectorstores
Vector stores
=============
[
📄️ Memory
----------
MemoryVectorStore is an in-memory, ephemeral vectorstore that stores embeddings in memory and does an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but it can be changed to any of the similarity metrics supported by ml-distance.
](/v0.2/docs/integrations/vectorstores/memory)
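As a quick illustration of what that card describes (our own sketch, not part of the index page), the store embeds texts up front and every search is an exact linear scan:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Embeddings are computed once and held in memory; nothing is persisted.
const store = await MemoryVectorStore.fromTexts(
  ["AC/DC", "Accept", "Aerosmith"],
  [{ id: 1 }, { id: 2 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Exact, linear search over all stored vectors (cosine similarity by default).
const results = await store.similaritySearch("acdc", 1);
console.log(results[0].pageContent);
```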
[
📄️ AnalyticDB
--------------
AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
](/v0.2/docs/integrations/vectorstores/analyticdb)
[
📄️ Astra DB
------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/astradb)
[
📄️ Azure AI Search
-------------------
Azure AI Search (formerly known as Azure Search and Azure Cognitive Search) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads on Azure. It supports also vector search using the k-nearest neighbor (kNN) algorithm and also semantic search.
](/v0.2/docs/integrations/vectorstores/azure_aisearch)
[
📄️ Azure Cosmos DB
-------------------
Azure Cosmos DB for MongoDB vCore makes it easy to create a database with full native MongoDB support. You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account’s connection string. Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that’s stored in Azure Cosmos DB.
](/v0.2/docs/integrations/vectorstores/azure_cosmosdb)
[
📄️ Cassandra
-------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/cassandra)
[
📄️ Chroma
----------
Chroma is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.
](/v0.2/docs/integrations/vectorstores/chroma)
[
📄️ ClickHouse
--------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/clickhouse)
[
📄️ CloseVector
---------------
Available in both the browser and Node.js.
](/v0.2/docs/integrations/vectorstores/closevector)
[
📄️ Cloudflare Vectorize
------------------------
If you're deploying your project in a Cloudflare worker, you can use Cloudflare Vectorize with LangChain.js.
](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize)
[
📄️ Convex
----------
LangChain.js supports Convex as a vector store, and supports the standard similarity search.
](/v0.2/docs/integrations/vectorstores/convex)
[
📄️ Couchbase
-------------
Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile,
](/v0.2/docs/integrations/vectorstores/couchbase)
[
📄️ Elasticsearch
-----------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/elasticsearch)
[
📄️ Faiss
---------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/faiss)
[
📄️ Google Vertex AI Matching Engine
------------------------------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/googlevertexai)
[
📄️ SAP HANA Cloud Vector Engine
--------------------------------
SAP HANA Cloud Vector Engine is a vector store fully integrated into the SAP HANA Cloud database.
](/v0.2/docs/integrations/vectorstores/hanavector)
[
📄️ HNSWLib
-----------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/hnswlib)
[
📄️ LanceDB
-----------
LanceDB is an embedded vector database for AI applications. It is open source and distributed with an Apache-2.0 license.
](/v0.2/docs/integrations/vectorstores/lancedb)
[
📄️ Milvus
----------
Milvus is a vector database built for embeddings similarity search and AI applications.
](/v0.2/docs/integrations/vectorstores/milvus)
[
📄️ Momento Vector Index (MVI)
------------------------------
MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Whether in Node.js, browser, or edge, Momento has you covered.
](/v0.2/docs/integrations/vectorstores/momento_vector_index)
[
📄️ MongoDB Atlas
-----------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/mongodb_atlas)
[
📄️ MyScale
-----------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/myscale)
[
📄️ Neo4j Vector Index
----------------------
Neo4j is an open-source graph database with integrated support for vector similarity search.
](/v0.2/docs/integrations/vectorstores/neo4jvector)
[
📄️ Neon Postgres
-----------------
Neon is a fully managed serverless PostgreSQL database. It separates storage and compute to offer features such as instant branching and automatic scaling.
](/v0.2/docs/integrations/vectorstores/neon)
[
📄️ OpenSearch
--------------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/opensearch)
[
📄️ PGVector
------------
To enable vector search in a generic PostgreSQL database, LangChain.js supports using the pgvector Postgres extension.
](/v0.2/docs/integrations/vectorstores/pgvector)
[
📄️ Pinecone
------------
You can use Pinecone vectorstores with LangChain.
](/v0.2/docs/integrations/vectorstores/pinecone)
[
📄️ Prisma
----------
To augment existing models in a PostgreSQL database with vector search, LangChain supports using Prisma together with PostgreSQL and the pgvector Postgres extension.
](/v0.2/docs/integrations/vectorstores/prisma)
[
📄️ Qdrant
----------
Qdrant is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload.
](/v0.2/docs/integrations/vectorstores/qdrant)
[
📄️ Redis
---------
Redis is a fast, open-source, in-memory data store.
](/v0.2/docs/integrations/vectorstores/redis)
[
📄️ Rockset
-----------
Rockset is a real-time analytics SQL database that runs in the cloud.
](/v0.2/docs/integrations/vectorstores/rockset)
[
📄️ SingleStore
---------------
SingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, as well as vector functions like dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching.
](/v0.2/docs/integrations/vectorstores/singlestore)
[
📄️ Supabase
------------
LangChain supports using a Supabase Postgres database as a vector store with the pgvector Postgres extension. Refer to the Supabase blog post for more information.
](/v0.2/docs/integrations/vectorstores/supabase)
[
📄️ Tigris
----------
Tigris makes it easy to build AI applications with vector embeddings.
](/v0.2/docs/integrations/vectorstores/tigris)
[
📄️ Turbopuffer
---------------
Turbopuffer is a serverless vector database built on object storage.
](/v0.2/docs/integrations/vectorstores/turbopuffer)
[
📄️ TypeORM
-----------
To enable vector search in a generic PostgreSQL database, LangChain.js supports using TypeORM with the pgvector Postgres extension.
](/v0.2/docs/integrations/vectorstores/typeorm)
[
📄️ Typesense
-------------
Vector store that utilizes the Typesense search engine.
](/v0.2/docs/integrations/vectorstores/typesense)
[
📄️ Upstash Vector
------------------
Upstash Vector is a REST-based serverless vector database designed for working with vector embeddings.
](/v0.2/docs/integrations/vectorstores/upstash)
[
📄️ USearch
-----------
Only available on Node.js.
](/v0.2/docs/integrations/vectorstores/usearch)
[
📄️ Vectara
-----------
Vectara is a platform for building GenAI applications. It provides an easy-to-use API for document indexing and querying that is managed by Vectara and is optimized for performance and accuracy.
](/v0.2/docs/integrations/vectorstores/vectara)
[
📄️ Vercel Postgres
-------------------
LangChain.js supports using the @vercel/postgres package to use generic Postgres databases as vector stores, provided they support the pgvector Postgres extension.
](/v0.2/docs/integrations/vectorstores/vercel_postgres)
[
📄️ Voy
-------
Voy is a WASM vector similarity search engine written in Rust.
](/v0.2/docs/integrations/vectorstores/voy)
[
📄️ Weaviate
------------
Weaviate is an open source vector database that stores both objects and vectors, allowing for combining vector search with structured filtering.
](/v0.2/docs/integrations/vectorstores/weaviate)
[
📄️ Xata
--------
Xata is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.
](/v0.2/docs/integrations/vectorstores/xata)
[
📄️ Zep
-------
Zep is a long-term memory service for AI Assistant apps.
](/v0.2/docs/integrations/vectorstores/zep)
https://js.langchain.com/v0.2/docs/integrations/vectorstores/hnswlib
HNSWLib
=======
Compatibility
Only available on Node.js.
HNSWLib is an in-memory vectorstore that can be saved to a file. It uses [HNSWLib](https://github.com/nmslib/hnswlib).
Setup
---------------------------------------
caution
**On Windows**, you might need to install [Visual Studio](https://visualstudio.microsoft.com/downloads/) first in order to properly build the `hnswlib-node` package.
You can install it with:
npm install hnswlib-node
yarn add hnswlib-node
pnpm add hnswlib-node
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
Usage
---------------------------------------
### Create a new index from texts
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
#### API Reference:
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);
#### API Reference:
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Save an index to a file and load it again
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
#### API Reference:
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Filter documents
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const result = await vectorStore.similaritySearch(
  "hello world",
  10,
  (document) => document.metadata.id === 3
);

// only "hello nice world" will be returned
console.log(result);
#### API Reference:
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Delete index
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

// The directory where the vector store was previously saved
const directory = "your/directory/here";

// Load the vector store from that directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// Delete the saved index
await loadedVectorStore.delete({ directory });
#### API Reference:
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/faiss
Faiss
=====
Compatibility
Only available on Node.js.
[Faiss](https://github.com/facebookresearch/faiss) is a library for efficient similarity search and clustering of dense vectors.
LangChain.js supports using Faiss as a vector store that can be saved to a file. It can also read saved files from [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading).
Setup
---------------------------------------
Install [faiss-node](https://github.com/ewfian/faiss-node), which provides Node.js bindings for [Faiss](https://github.com/facebookresearch/faiss):
npm install -S faiss-node
yarn add faiss-node
pnpm add faiss-node
To read files saved from [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading), the [pickleparser](https://github.com/ewfian/pickleparser) package also needs to be installed:
npm install -S pickleparser
yarn add pickleparser
pnpm add pickleparser
Usage
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Create a new index from texts
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await FaissStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Deleting vectors
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const vectorStore = new FaissStore(new OpenAIEmbeddings(), {});
const ids = ["2", "1", "4"];
const idsReturned = await vectorStore.addDocuments(
  [
    new Document({
      pageContent: "my world",
      metadata: { tag: 2 },
    }),
    new Document({
      pageContent: "our world",
      metadata: { tag: 1 },
    }),
    new Document({
      pageContent: "your world",
      metadata: { tag: 4 },
    }),
  ],
  {
    ids,
  }
);
console.log(idsReturned);
/*
  [ '2', '1', '4' ]
*/
const docs = await vectorStore.similaritySearch("my world", 3);
console.log(docs);
/*
[
  Document { pageContent: 'my world', metadata: { tag: 2 } },
  Document { pageContent: 'your world', metadata: { tag: 4 } },
  Document { pageContent: 'our world', metadata: { tag: 1 } }
]
*/
await vectorStore.delete({ ids: [ids[0], ids[1]] });
const docs2 = await vectorStore.similaritySearch("my world", 3);
console.log(docs2);
/*
[ Document { pageContent: 'your world', metadata: { tag: 4 } } ]
*/
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Merging indexes and creating new index from another instance
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  // Create an initial vector store
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  // Create another vector store from texts
  const vectorStore2 = await FaissStore.fromTexts(
    ["Some text"],
    [{ id: 1 }],
    new OpenAIEmbeddings()
  );

  // merge the first vector store into vectorStore2
  await vectorStore2.mergeFrom(vectorStore);

  const resultOne = await vectorStore2.similaritySearch("hello world", 1);
  console.log(resultOne);

  // You can also create a new vector store from another FaissStore index
  const vectorStore3 = await FaissStore.fromIndex(
    vectorStore2,
    new OpenAIEmbeddings()
  );
  const resultTwo = await vectorStore3.similaritySearch("Bye bye", 1);
  console.log(resultTwo);
};
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Save an index to file and load it again
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await FaissStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await FaissStore.load(
  directory,
  new OpenAIEmbeddings()
);

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Load the saved file from [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading)
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

// The directory of data saved from Python
const directory = "your/directory/here";

// Load the vector store from the directory
const loadedVectorStore = await FaissStore.loadFromPython(
  directory,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const result = await loadedVectorStore.similaritySearch("test", 2);
console.log("result", result);
#### API Reference:
* [FaissStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/lancedb
LanceDB
=======
LanceDB is an embedded vector database for AI applications. It is open source and distributed with an Apache-2.0 license.
LanceDB datasets are persisted to disk and can be shared between Node.js and Python.
Setup
---------------------------------------
Install the [LanceDB](https://github.com/lancedb/lancedb) [Node.js bindings](https://www.npmjs.com/package/vectordb):
npm install -S vectordb
yarn add vectordb
pnpm add vectordb
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
Usage
---------------------------------------
### Create a new index from texts
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", id: 1 },
  ]);

  const vectorStore = await LanceDB.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    { table }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'hello nice world', metadata: { id: 3 } } ]
};
#### API Reference:
* [LanceDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";
import { connect } from "vectordb";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", source: "a" },
  ]);

  const vectorStore = await LanceDB.fromDocuments(
    docs,
    new OpenAIEmbeddings(),
    { table }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [
  //   Document {
  //     pageContent: 'Foo\nBar\nBaz\n\n',
  //     metadata: { source: 'src/document_loaders/example_data/example.txt' }
  //   }
  // ]
};
#### API Reference:
* [LanceDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Open an existing dataset
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

//
// You can open a LanceDB dataset created elsewhere, such as LangChain Python, by opening
// an existing table
//
export const run = async () => {
  const uri = await createdTestDb();
  const db = await connect(uri);
  const table = await db.openTable("vectors");

  const vectorStore = new LanceDB(new OpenAIEmbeddings(), { table });

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'Hello world', metadata: { id: 1 } } ]
};

async function createdTestDb(): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  await db.createTable("vectors", [
    { vector: Array(1536), text: "Hello world", id: 1 },
    { vector: Array(1536), text: "Bye bye", id: 2 },
    { vector: Array(1536), text: "hello nice world", id: 3 },
  ]);
  return dir;
}
#### API Reference:
* [LanceDB](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/memory
`MemoryVectorStore`
===================
MemoryVectorStore is an ephemeral vector store that keeps embeddings in memory and performs an exact, linear search for the most similar embeddings. The default similarity metric is cosine similarity, but this can be changed to any of the similarity metrics supported by [ml-distance](https://mljs.github.io/distance/modules/similarity.html).
Usage
---------------------------------------
### Create a new index from texts
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Use a custom similarity metric
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { similarity } from "ml-distance";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  { similarity: similarity.pearson }
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/closevector
CloseVector
===========
Compatibility: available on both browser and Node.js
[CloseVector](https://closevector.getmegaportal.com/) is a cross-platform vector database that can run in both the browser and Node.js. For example, you can create your index on Node.js and then load and query it in the browser. For more information, please visit [CloseVector Docs](https://closevector-docs.getmegaportal.com/).
Setup
-----
### CloseVector Web
* npm
* Yarn
* pnpm
npm install -S closevector-web
yarn add closevector-web
pnpm add closevector-web
### CloseVector Node
* npm
* Yarn
* pnpm
npm install -S closevector-node
yarn add closevector-node
pnpm add closevector-node
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
Usage
-----
### Create a new index from texts
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  // If you want to import the browser version, use the following line instead:
  // const vectorStore = await CloseVectorWeb.fromTexts(
  const vectorStore = await CloseVectorNode.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};
#### API Reference:
* [CloseVectorNode](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromDocuments(
const vectorStore = await CloseVectorNode.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
#### API Reference:
* [CloseVectorNode](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Save an index to CloseVector CDN and load it again
CloseVector supports saving/loading indexes to/from the cloud. To use this feature, you need to create an account on [CloseVector](https://closevector.getmegaportal.com/). Please read the [CloseVector Docs](https://closevector-docs.getmegaportal.com/) and generate your API key first by [logging in](https://closevector.getmegaportal.com/).
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromTexts(
const vectorStore = await CloseVectorNode.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  undefined,
  {
    key: "your access key",
    secret: "your secret",
  }
);

// Save the vector store to cloud
await vectorStore.saveToCloud({
  description: "example",
  public: true,
});

const { uuid } = vectorStore.instance;

// Load the vector store from cloud
// const loadedVectorStore = await CloseVectorWeb.load(
const loadedVectorStore = await CloseVectorNode.loadFromCloud({
  uuid,
  embeddings: new OpenAIEmbeddings(),
  credentials: {
    key: "your access key",
    secret: "your secret",
  },
});

// If you want to import the node version, use the following lines instead:
// const loadedVectorStoreOnNode = await CloseVectorNode.loadFromCloud({
//   uuid,
//   embeddings: new OpenAIEmbeddings(),
//   credentials: {
//     key: "your access key",
//     secret: "your secret"
//   }
// });
const loadedVectorStoreOnBrowser = await CloseVectorWeb.loadFromCloud({
  uuid,
  embeddings: new OpenAIEmbeddings(),
  credentials: {
    key: "your access key",
    secret: "your secret",
  },
});

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);

// or
const resultOnBrowser = await loadedVectorStoreOnBrowser.similaritySearch(
  "hello world",
  1
);
console.log(resultOnBrowser);
#### API Reference:
* [CloseVectorNode](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [CloseVectorWeb](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_closevector_web.CloseVectorWeb.html) from `@langchain/community/vectorstores/closevector/web`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Save an index to file and load it again
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromTexts(
const vectorStore = await CloseVectorNode.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
// If you want to import the browser version, use the following line instead:
// const loadedVectorStore = await CloseVectorWeb.load(
const loadedVectorStore = await CloseVectorNode.load(
  directory,
  new OpenAIEmbeddings()
);

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
#### API Reference:
* [CloseVectorNode](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/chroma
Chroma
======
> [Chroma](https://docs.trychroma.com/getting-started) is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.
[![Discord](https://img.shields.io/discord/1073293645303795742)](https://discord.gg/MMeYNTmh3x) [![License](https://img.shields.io/static/v1?label=license&message=Apache 2.0&color=white)](https://github.com/chroma-core/chroma/blob/master/LICENSE) ![Integration Tests](https://github.com/chroma-core/chroma/actions/workflows/chroma-integration-test.yml/badge.svg?branch=main)
* [Website](https://www.trychroma.com/)
* [Documentation](https://docs.trychroma.com/)
* [Twitter](https://twitter.com/trychroma)
* [Discord](https://discord.gg/MMeYNTmh3x)
Setup
-----
1. Run Chroma with Docker on your computer
git clone git@github.com:chroma-core/chroma.git
cd chroma
docker-compose up -d --build
2. Install the Chroma JS SDK.
* npm
* Yarn
* pnpm
npm install -S chromadb
yarn add chromadb
pnpm add chromadb
Chroma is fully-typed, fully-tested and fully-documented.
Like any other database, you can:
* `.add`
* `.get`
* `.update`
* `.upsert`
* `.delete`
* `.peek`
* and `.query` runs the similarity search.
View full docs at [docs](https://docs.trychroma.com/js_reference/Collection).
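For a rough sense of what those collection methods look like against the raw `chromadb` client, here is a minimal sketch (separate from the LangChain wrapper used below; the collection name and toy vectors are illustrative, and we pass precomputed embeddings so no embedding function is needed):

import { ChromaClient } from "chromadb";

// Connect to the Chroma server started via Docker above
const client = new ChromaClient({ path: "http://localhost:8000" });

// Create (or fetch) a collection, then add records to it
const collection = await client.getOrCreateCollection({
  name: "demo-collection",
});

await collection.add({
  ids: ["id1", "id2"],
  embeddings: [
    [0.1, 0.2, 0.3],
    [0.9, 0.8, 0.7],
  ], // toy vectors; in practice these come from an embedding model
  documents: ["Hello world", "Bye bye"],
  metadatas: [{ source: "greeting" }, { source: "farewell" }],
});

// .query runs the similarity search; nResults caps the number of hits
const results = await collection.query({
  queryEmbeddings: [[0.1, 0.2, 0.3]],
  nResults: 1,
});
console.log(results);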
Usage: index and query documents
--------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Create vector store and index the docs
const vectorStore = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), {
  collectionName: "a-test-collection",
  url: "http://localhost:8000", // Optional, will default to this value
  collectionMetadata: {
    "hnsw:space": "cosine",
  }, // Optional, can be used to specify the distance method of the embedding space https://docs.trychroma.com/usage-guide#changing-the-distance-function
});

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
#### API Reference:
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Usage: index and query texts
----------------------------
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Chroma.fromTexts(
  [
    `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
    Harmonic Labyrinth of the dreaded Majotaur?`,
    "Achilles: Yiikes! What is that?",
    `Tortoise: They say-although I person never believed it myself-that an I
    Majotaur has created a tiny labyrinth sits in a pit in the middle of
    it, waiting innocent victims to get lost in its fears complexity.
    Then, when they wander and dazed into the center, he laughs and
    laughs at them-so hard, that he laughs them to death!`,
    "Achilles: Oh, no!",
    "Tortoise: But it's only a myth. Courage, Achilles.",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  {
    collectionName: "godel-escher-bach",
  }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/

// You can also filter by metadata
const filteredResponse = await vectorStore.similaritySearch("scared", 2, {
  id: 1,
});
console.log(filteredResponse);
/*
[
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/
#### API Reference:
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage: query docs from an existing collection
---------------------------------------------
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await Chroma.fromExistingCollection(
  new OpenAIEmbeddings(),
  { collectionName: "godel-escher-bach" }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/
#### API Reference:
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage: delete docs
------------------
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
const vectorStore = new Chroma(embeddings, {
  collectionName: "test-deletion",
});

const documents = [
  {
    pageContent: `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
    Harmonic Labyrinth of the dreaded Majotaur?`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Yiikes! What is that?",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: `Tortoise: They say-although I person never believed it myself-that an I
    Majotaur has created a tiny labyrinth sits in a pit in the middle of
    it, waiting innocent victims to get lost in its fears complexity.
    Then, when they wander and dazed into the center, he laughs and
    laughs at them-so hard, that he laughs them to death!`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Oh, no!",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: "Tortoise: But it's only a myth. Courage, Achilles.",
    metadata: {
      speaker: "Tortoise",
    },
  },
];

// Also supports an additional {ids: []} parameter for upsertion
const ids = await vectorStore.addDocuments(documents);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/

// You can also pass a "filter" parameter instead
await vectorStore.delete({ ids });

const response2 = await vectorStore.similaritySearch("scared", 2);
console.log(response2);
/*
  []
*/
#### API Reference:
* [Chroma](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/zep
Zep
===
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps. With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, while also reducing hallucinations, latency, and cost.
> Interested in Zep Cloud? See [Zep Cloud Installation Guide](https://help.getzep.com/sdks), [Zep Cloud Vector Store Example](https://help.getzep.com/langchain/examples/vectorstore-example)
**Note:** The `ZepVectorStore` works with `Documents` and is intended to be used as a `Retriever`. It offers separate functionality from Zep's `ZepMemory` class, which is designed for persisting, enriching and searching your user's chat history.
Why Zep's VectorStore? 🤖🚀
---------------------------
Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server. The Zep TS/JS client can also be used in non-Node edge environments. These capabilities, together with Zep's chat memory functionality, make Zep ideal for building conversational LLM apps where latency and performance are important.
### Supported Search Types
Zep supports both similarity search and Maximal Marginal Relevance (MMR) search. MMR search is particularly useful for Retrieval Augmented Generation applications as it re-ranks results to ensure diversity in the returned documents.
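For intuition, MMR can be described as a greedy re-ranking rule. This is the standard textbook formulation, not Zep-specific notation: given a query q, a candidate set D, and the set S of documents already selected, the next document chosen is

MMR = argmax(d ∈ D \ S) [ λ · sim(q, d) - (1 - λ) · max(s ∈ S) sim(d, s) ]

where λ ∈ [0, 1] trades off relevance against diversity; λ = 1 reduces to plain similarity search, while smaller values penalize documents that closely resemble results already picked.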
Installation
------------
Follow the [Zep Quickstart Guide](https://docs.getzep.com/deployment/quickstart/) to install and get started with Zep.
Usage
-----
You'll need your Zep API URL and optionally an API key to use the Zep VectorStore. See the [Zep docs](https://docs.getzep.com) for more information.
In the examples below, we're using Zep's auto-embedding feature, which automatically embeds documents on the Zep server using low-latency embedding models. Since LangChain requires passing in an `Embeddings` instance, we pass in `FakeEmbeddings`.
**Note:** If you pass in an `Embeddings` instance other than `FakeEmbeddings`, this class will be used to embed documents. You must also set your document collection to `isAutoEmbedded === false`. See the `OpenAIEmbeddings` example below.
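To make the two setups concrete, here is a minimal sketch of the config difference only (the values are illustrative and mirror the examples below):

import { FakeEmbeddings } from "@langchain/core/utils/testing";
import { OpenAIEmbeddings } from "@langchain/openai";

// Option 1: let the Zep server auto-embed; pass FakeEmbeddings as a placeholder
const autoEmbeddedConfig = {
  apiUrl: "http://localhost:8000",
  collectionName: "my-collection", // illustrative name
  embeddingDimensions: 1536, // must match the server-side embedding model
  isAutoEmbedded: true,
};
const placeholderEmbeddings = new FakeEmbeddings();

// Option 2: embed client-side with a real Embeddings instance
const clientEmbeddedConfig = {
  apiUrl: "http://localhost:8000",
  collectionName: "my-collection",
  embeddingDimensions: 1536, // must match the width of your Embeddings model
  isAutoEmbedded: false,
};
const realEmbeddings = new OpenAIEmbeddings();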
### Example: Creating a ZepVectorStore from Documents & Querying
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { FakeEmbeddings } from "@langchain/core/utils/testing";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: true, // If true, the vector store will automatically embed documents when they are added
  };

  const embeddings = new FakeEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  // Wait for the documents to be embedded
  // eslint-disable-next-line no-constant-condition
  while (true) {
    const c = await vectorStore.client.document.getCollection(collectionName);
    console.log(
      `Embedding status: ${c.document_embedded_count}/${c.document_count} documents embedded`
    );
    // eslint-disable-next-line no-promise-executor-return
    await new Promise((resolve) => setTimeout(resolve, 1000));
    if (c.status === "ready") {
      break;
    }
  }

  const results = await vectorStore.similaritySearchWithScore("bar", 3);

  console.log("Similarity Results:");
  console.log(JSON.stringify(results));

  const results2 = await vectorStore.maxMarginalRelevanceSearch("bar", {
    k: 3,
  });

  console.log("MMR Results:");
  console.log(JSON.stringify(results2));
};
#### API Reference:
* [ZepVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [FakeEmbeddings](https://v02.api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Example: Querying a ZepVectorStore using a metadata filter
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { FakeEmbeddings } from "@langchain/core/utils/testing";
import { randomUUID } from "crypto";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({
    metadata: { album: "Led Zeppelin IV", year: 1971 },
    pageContent:
      "Stairway to Heaven is one of the most iconic songs by Led Zeppelin.",
  }),
  new Document({
    metadata: { album: "Led Zeppelin I", year: 1969 },
    pageContent:
      "Dazed and Confused was a standout track on Led Zeppelin's debut album.",
  }),
  new Document({
    metadata: { album: "Physical Graffiti", year: 1975 },
    pageContent:
      "Kashmir, from Physical Graffiti, showcases Led Zeppelin's unique blend of rock and world music.",
  }),
  new Document({
    metadata: { album: "Houses of the Holy", year: 1973 },
    pageContent:
      "The Rain Song is a beautiful, melancholic piece from Houses of the Holy.",
  }),
  new Document({
    metadata: { band: "Black Sabbath", album: "Paranoid", year: 1970 },
    pageContent:
      "Paranoid is Black Sabbath's second studio album and includes some of their most notable songs.",
  }),
  new Document({
    metadata: {
      band: "Iron Maiden",
      album: "The Number of the Beast",
      year: 1982,
    },
    pageContent:
      "The Number of the Beast is often considered Iron Maiden's best album.",
  }),
  new Document({
    metadata: { band: "Metallica", album: "Master of Puppets", year: 1986 },
    pageContent:
      "Master of Puppets is widely regarded as Metallica's finest work.",
  }),
  new Document({
    metadata: { band: "Megadeth", album: "Rust in Peace", year: 1990 },
    pageContent:
      "Rust in Peace is Megadeth's fourth studio album and features intricate guitar work.",
  }),
];

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: true, // If true, the vector store will automatically embed documents when they are added
  };

  const embeddings = new FakeEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  // Wait for the documents to be embedded
  // eslint-disable-next-line no-constant-condition
  while (true) {
    const c = await vectorStore.client.document.getCollection(collectionName);
    console.log(
      `Embedding status: ${c.document_embedded_count}/${c.document_count} documents embedded`
    );
    // eslint-disable-next-line no-promise-executor-return
    await new Promise((resolve) => setTimeout(resolve, 1000));
    if (c.status === "ready") {
      break;
    }
  }

  vectorStore
    .similaritySearchWithScore("sad music", 3, {
      where: { jsonpath: "$[*] ? (@.year == 1973)" }, // We should see a single result: The Rain Song
    })
    .then((results) => {
      console.log(`\n\nSimilarity Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });

  // We're not filtering here, but rather demonstrating MMR at work.
  // We could also add a filter to the MMR search, as we did with the similarity search above.
  vectorStore
    .maxMarginalRelevanceSearch("sad music", {
      k: 3,
    })
    .then((results) => {
      console.log(`\n\nMMR Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });
};
#### API Reference:
* [ZepVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [FakeEmbeddings](https://v02.api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Example: Using a LangChain Embedding Class such as `OpenAIEmbeddings`
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: false, // set to false to disable auto-embedding
  };

  const embeddings = new OpenAIEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  const results = await vectorStore.similaritySearchWithScore("bar", 3);

  console.log("Similarity Results:");
  console.log(JSON.stringify(results));

  const results2 = await vectorStore.maxMarginalRelevanceSearch("bar", {
    k: 3,
  });

  console.log("MMR Results:");
  console.log(JSON.stringify(results2));
};
#### API Reference:
* [ZepVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/weaviate
https://js.langchain.com/v0.2/docs/integrations/vectorstores/weaviate | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[🦜🔗](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Providers](/v0.2/docs/integrations/platforms/)
* [Providers](/v0.2/docs/integrations/platforms/)
* [Anthropic](/v0.2/docs/integrations/platforms/anthropic)
* [AWS](/v0.2/docs/integrations/platforms/aws)
* [Google](/v0.2/docs/integrations/platforms/google)
* [Microsoft](/v0.2/docs/integrations/platforms/microsoft)
* [OpenAI](/v0.2/docs/integrations/platforms/openai)
* [Components](/v0.2/docs/integrations/components)
* [Chat models](/v0.2/docs/integrations/chat/)
* [LLMs](/v0.2/docs/integrations/llms/)
* [Embedding models](/v0.2/docs/integrations/text_embedding)
* [Document loaders](/v0.2/docs/integrations/document_loaders)
* [Document transformers](/v0.2/docs/integrations/document_transformers)
* [Vector stores](/v0.2/docs/integrations/vectorstores)
* [Memory](/v0.2/docs/integrations/vectorstores/memory)
* [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb)
* [Astra DB](/v0.2/docs/integrations/vectorstores/astradb)
* [Azure AI Search](/v0.2/docs/integrations/vectorstores/azure_aisearch)
* [Azure Cosmos DB](/v0.2/docs/integrations/vectorstores/azure_cosmosdb)
* [Cassandra](/v0.2/docs/integrations/vectorstores/cassandra)
* [Chroma](/v0.2/docs/integrations/vectorstores/chroma)
* [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse)
* [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* [Cloudflare Vectorize](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize)
* [Convex](/v0.2/docs/integrations/vectorstores/convex)
* [Couchbase](/v0.2/docs/integrations/vectorstores/couchbase)
* [Elasticsearch](/v0.2/docs/integrations/vectorstores/elasticsearch)
* [Faiss](/v0.2/docs/integrations/vectorstores/faiss)
* [Google Vertex AI Matching Engine](/v0.2/docs/integrations/vectorstores/googlevertexai)
* [SAP HANA Cloud Vector Engine](/v0.2/docs/integrations/vectorstores/hanavector)
* [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib)
* [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb)
* [Milvus](/v0.2/docs/integrations/vectorstores/milvus)
* [Momento Vector Index (MVI)](/v0.2/docs/integrations/vectorstores/momento_vector_index)
* [MongoDB Atlas](/v0.2/docs/integrations/vectorstores/mongodb_atlas)
* [MyScale](/v0.2/docs/integrations/vectorstores/myscale)
* [Neo4j Vector Index](/v0.2/docs/integrations/vectorstores/neo4jvector)
* [Neon Postgres](/v0.2/docs/integrations/vectorstores/neon)
* [OpenSearch](/v0.2/docs/integrations/vectorstores/opensearch)
* [PGVector](/v0.2/docs/integrations/vectorstores/pgvector)
* [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone)
* [Prisma](/v0.2/docs/integrations/vectorstores/prisma)
* [Qdrant](/v0.2/docs/integrations/vectorstores/qdrant)
* [Redis](/v0.2/docs/integrations/vectorstores/redis)
* [Rockset](/v0.2/docs/integrations/vectorstores/rockset)
* [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore)
* [Supabase](/v0.2/docs/integrations/vectorstores/supabase)
* [Tigris](/v0.2/docs/integrations/vectorstores/tigris)
* [Turbopuffer](/v0.2/docs/integrations/vectorstores/turbopuffer)
* [TypeORM](/v0.2/docs/integrations/vectorstores/typeorm)
* [Typesense](/v0.2/docs/integrations/vectorstores/typesense)
* [Upstash Vector](/v0.2/docs/integrations/vectorstores/upstash)
* [USearch](/v0.2/docs/integrations/vectorstores/usearch)
* [Vectara](/v0.2/docs/integrations/vectorstores/vectara)
* [Vercel Postgres](/v0.2/docs/integrations/vectorstores/vercel_postgres)
* [Voy](/v0.2/docs/integrations/vectorstores/voy)
* [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate)
* [Xata](/v0.2/docs/integrations/vectorstores/xata)
* [Zep](/v0.2/docs/integrations/vectorstores/zep)
* [Retrievers](/v0.2/docs/integrations/retrievers)
* [Tools](/v0.2/docs/integrations/tools)
* [Toolkits](/v0.2/docs/integrations/toolkits)
* [Stores](/v0.2/docs/integrations/stores/)
* [](/v0.2/)
* [Components](/v0.2/docs/integrations/components)
* [Vector stores](/v0.2/docs/integrations/vectorstores)
* Weaviate
Weaviate
========
Weaviate is an open source vector database that stores both objects and vectors, allowing you to combine vector search with structured filtering. LangChain connects to Weaviate via the `weaviate-ts-client` package, the official TypeScript client for Weaviate.
LangChain inserts vectors directly into Weaviate and queries Weaviate for the nearest neighbors of a given vector, so you can use all of the LangChain Embeddings integrations with Weaviate.
Setup
-----
Weaviate has its own standalone integration package with LangChain, accessible via [`@langchain/weaviate`](https://www.npmjs.com/package/@langchain/weaviate) on NPM!
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/weaviate @langchain/openai @langchain/community
yarn add @langchain/weaviate @langchain/openai @langchain/community
pnpm add @langchain/weaviate @langchain/openai @langchain/community
You'll need to run Weaviate either locally or on a server, see [the Weaviate documentation](https://weaviate.io/developers/weaviate/installation) for more information.
Usage: insert documents
-----------------------
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store and fill it with some texts + metadata
  await WeaviateStore.fromTexts(
    ["hello world", "hi there", "how are you", "bye now"],
    [{ foo: "bar" }, { foo: "baz" }, { foo: "qux" }, { foo: "bar" }],
    new OpenAIEmbeddings(),
    {
      client,
      indexName: "Test",
      textKey: "text",
      metadataKeys: ["foo"],
    }
  );
}
#### API Reference:
* [WeaviateStore](https://v02.api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage: query documents
----------------------
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  // Search the index without any filters
  const results = await store.similaritySearch("hello world", 1);
  console.log(results);
  /*
    [ Document { pageContent: 'hello world', metadata: { foo: 'bar' } } ]
  */

  // Search the index with a filter, in this case, only return results where
  // the "foo" metadata key is equal to "baz", see the Weaviate docs for more
  // https://weaviate.io/developers/weaviate/api/graphql/filters
  const results2 = await store.similaritySearch("hello world", 1, {
    where: {
      operator: "Equal",
      path: ["foo"],
      valueText: "baz",
    },
  });
  console.log(results2);
  /*
    [ Document { pageContent: 'hi there', metadata: { foo: 'baz' } } ]
  */
}
#### API Reference:
* [WeaviateStore](https://v02.api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage: maximal marginal relevance
---------------------------------
You can use maximal marginal relevance search, which optimizes for similarity to the query AND diversity.
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  const resultOne = await store.maxMarginalRelevanceSearch("Hello world", {
    k: 1,
  });
  console.log(resultOne);
}
#### API Reference:
* [WeaviateStore](https://v02.api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, delete documents
-----------------------
```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  const docs = [{ pageContent: "see ya!", metadata: { foo: "bar" } }];

  // Also supports an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);

  // Search the index without any filters
  const results = await store.similaritySearch("see ya!", 1);
  console.log(results);
  /*
  [ Document { pageContent: 'see ya!', metadata: { foo: 'bar' } } ]
  */

  // Delete documents with ids
  await store.delete({ ids });

  const results2 = await store.similaritySearch("see ya!", 1);
  console.log(results2);
  /*
  []
  */

  const docs2 = [
    { pageContent: "hello world", metadata: { foo: "bar" } },
    { pageContent: "hi there", metadata: { foo: "baz" } },
    { pageContent: "how are you", metadata: { foo: "qux" } },
    { pageContent: "hello world", metadata: { foo: "bar" } },
    { pageContent: "bye now", metadata: { foo: "bar" } },
  ];

  await store.addDocuments(docs2);

  const results3 = await store.similaritySearch("hello world", 1);
  console.log(results3);
  /*
  [ Document { pageContent: 'hello world', metadata: { foo: 'bar' } } ]
  */

  // Delete documents with a filter
  await store.delete({
    filter: {
      where: {
        operator: "Equal",
        path: ["foo"],
        valueText: "bar",
      },
    },
  });

  const results4 = await store.similaritySearch("hello world", 1, {
    where: {
      operator: "Equal",
      path: ["foo"],
      valueText: "bar",
    },
  });
  console.log(results4);
  /*
  []
  */
}
```
#### API Reference:
* [WeaviateStore](https://v02.api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *
Supabase
========
LangChain supports using a Supabase Postgres database as a vector store via the `pgvector` Postgres extension. Refer to the [Supabase blog post](https://supabase.com/blog/openai-embeddings-postgres-vector) for more information.
Setup
-----
### Install the library

```bash
npm install -S @supabase/supabase-js
# or
yarn add @supabase/supabase-js
# or
pnpm add @supabase/supabase-js
```
### Create a table and search function in your database[](#create-a-table-and-search-function-in-your-database "Direct link to Create a table and search function in your database")
Run this in your database:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  embedding jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    (embedding::text)::jsonb as embedding,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```
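If you'd like to sanity-check the function outside of LangChain, you can call it directly through `supabase-js`. Below is a minimal sketch, assuming the table and `match_documents` function above are already in place and that your documents were embedded with the same OpenAI model; the query text is just an example:

```typescript
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";

const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!
);

// Embed an example query with the same model used to populate the table,
// then invoke the Postgres function directly via RPC.
const queryEmbedding = await new OpenAIEmbeddings().embedQuery("Hello world");

const { data, error } = await client.rpc("match_documents", {
  query_embedding: queryEmbedding,
  match_count: 1,
  filter: {},
});
if (error) throw error;
console.log(data);
```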
Usage
-----

Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
### Standard Usage

The example below shows how to perform a basic similarity search with Supabase:
```typescript
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Bye bye", "What's this?"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const resultOne = await vectorStore.similaritySearch("Hello world", 1);

  console.log(resultOne);
};
```
#### API Reference:
* [SupabaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Metadata Filtering

Given the above `match_documents` Postgres function, you can also pass a filter parameter to return only documents with a specific metadata field value. This filter parameter is a JSON object, and the `match_documents` function will use the Postgres JSONB containment operator `@>` to filter documents by the metadata field values you specify. See details on the [Postgres JSONB containment operator](https://www.postgresql.org/docs/current/datatype-json.html#JSON-CONTAINMENT) for more information.
**Note:** If you've previously been using `SupabaseVectorStore`, you may need to drop and recreate the `match_documents` function per the updated SQL above to use this functionality.
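A minimal sketch of that migration, assuming you have a single `match_documents` overload (if you have several, Postgres will require the full argument list in the `drop` statement):

```sql
-- Remove the old function, then re-run the updated
-- "create function match_documents ..." statement from the setup section above.
drop function if exists match_documents;
```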
```typescript
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Hello world", "Hello world"],
    [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const result = await vectorStore.similaritySearch("Hello world", 1, {
    user_id: 3,
  });

  console.log(result);
};
```
#### API Reference:
* [SupabaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Metadata Query Builder Filtering
You can also use query builder-style filtering similar to how [the Supabase JavaScript library works](https://supabase.com/docs/reference/javascript/using-filters) instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (`->` for integer or `->>` for text) as defined in [Postgrest API documentation](https://postgrest.org/en/stable/references/api/tables_views.html?highlight=operators#json-columns) and specify the data type of the property (e.g. the column should look something like `metadata->some_int_value::int`).
```typescript
import {
  SupabaseFilterRPCCall,
  SupabaseVectorStore,
} from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theoretical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle.",
      metadata: { b: 1, c: 10, stuff: "right" },
    },
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.",
      metadata: { b: 2, c: 9, stuff: "right" },
    },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } },
    { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } },
    { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } },
  ];

  // Also supports an additional {ids: []} parameter for upsertion
  await store.addDocuments(docs);

  const funcFilterA: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .textSearch("content", `'multidimensional' & 'spaces'`, {
        config: "english",
      });

  const resultA = await store.similaritySearch("quantum", 4, funcFilterA);

  const funcFilterB: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .filter("metadata->>stuff", "eq", "right");

  const resultB = await store.similaritySearch("hello", 2, funcFilterB);

  console.log(resultA, resultB);
};
```
#### API Reference:
* [SupabaseFilterRPCCall](https://v02.api.js.langchain.com/types/langchain_community_vectorstores_supabase.SupabaseFilterRPCCall.html) from `@langchain/community/vectorstores/supabase`
* [SupabaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Maximal marginal relevance
You can use maximal marginal relevance search, which optimizes for similarity to the query AND diversity.
**Note:** If you've previously been using `SupabaseVectorStore`, you may need to drop and recreate the `match_documents` function per the updated SQL above to use this functionality.
```typescript
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Bye bye", "What's this?"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const resultOne = await vectorStore.maxMarginalRelevanceSearch(
    "Hello world",
    { k: 1 }
  );

  console.log(resultOne);
};
```
#### API Reference:
* [SupabaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Document deletion
```typescript
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ];

  // Also takes an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);

  const resultA = await store.similaritySearch("hello", 2);
  console.log(resultA);
  /*
  [
    Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ]
  */

  await store.delete({ ids });

  const resultB = await store.similaritySearch("hello", 2);
  console.log(resultB);
  /*
  []
  */
};
```
#### API Reference:
* [SupabaseVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *
Pinecone
========
You can use [Pinecone](https://www.pinecone.io/) vectorstores with LangChain. To get started, install the integration package and the official Pinecone SDK with:
Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install -S @langchain/pinecone @pinecone-database/pinecone
# or
yarn add @langchain/pinecone @pinecone-database/pinecone
# or
pnpm add @langchain/pinecone @pinecone-database/pinecone
```
The examples below use OpenAI embeddings, but you can swap in whichever provider you'd like. Keep in mind that different embedding models may produce vectors with a different number of dimensions:
```bash
npm install -S @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
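For example, a hypothetical sketch of swapping models: OpenAI's `text-embedding-3-large` produces 3072-dimensional vectors by default, while `text-embedding-ada-002` (the default) produces 1536-dimensional vectors, so your Pinecone index must be created with a matching dimension:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

// The dimension of your Pinecone index must match the embedding model:
// text-embedding-ada-002 (default) -> 1536 dimensions
// text-embedding-3-large           -> 3072 dimensions (by default)
const embeddings = new OpenAIEmbeddings({
  modelName: "text-embedding-3-large",
});
```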
Index docs
----------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  pineconeIndex,
  maxConcurrency: 5, // Maximum number of batch requests to allow at once. Each batch is 1000 vectors.
});
```
#### API Reference:
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Query docs
----------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

/* Search the vector DB independently with metadata filters */
const results = await vectorStore.similaritySearch("pinecone", 1, {
  foo: "bar",
});
console.log(results);
/*
[
  Document { pageContent: 'pinecone is a vector db', metadata: { foo: 'bar' } }
]
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Delete docs
-----------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const embeddings = new OpenAIEmbeddings();

const pineconeStore = new PineconeStore(embeddings, { pineconeIndex });

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

const pageContent = "some arbitrary content";

// Also takes an additional {ids: []} parameter for upsertion
const ids = await pineconeStore.addDocuments(docs);

const results = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});
console.log(results);
/*
[
  Document {
    pageContent: 'pinecone is a vector db',
    metadata: { foo: 'bar' },
  },
  Document {
    pageContent: "the quick brown fox jumped over the lazy dog",
    metadata: { foo: "bar" },
  }
]
*/

await pineconeStore.delete({
  ids: [ids[0], ids[1]],
});

const results2 = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});
console.log(results2);
/*
  []
*/
```
#### API Reference:
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Maximal marginal relevance search
---------------------------------
Pinecone supports maximal marginal relevance search, which takes a combination of documents that are most similar to the inputs, then reranks and optimizes for diversity.
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

/* Search the vector DB independently with meta filters */
const results = await vectorStore.maxMarginalRelevanceSearch("pinecone", {
  k: 5,
  fetchK: 20, // Default value for the number of initial documents to fetch for reranking.
  // You can pass a filter as well
  // filter: {},
});

console.log(results);
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://v02.api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
* * *
SingleStore
===========
[SingleStoreDB](https://singlestore.com/) is a high-performance distributed SQL database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premise. It provides vector storage, as well as vector functions like [dot\_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html) and [euclidean\_distance](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/euclidean_distance.html), thereby supporting AI applications that require text similarity matching.
Compatibility: Only available on Node.js.
LangChain.js requires the `mysql2` library to create a connection to a SingleStoreDB instance.
Setup
-----
1. Establish a SingleStoreDB environment. You have the flexibility to choose between [Cloud-based](https://docs.singlestore.com/managed-service/en/getting-started-with-singlestoredb-cloud.html) or [On-Premise](https://docs.singlestore.com/db/v8.1/en/developer-resources/get-started-using-singlestoredb-for-free.html) editions.
2. Install the mysql2 JS client
```bash
npm install -S mysql2
# or
yarn add mysql2
# or
pnpm add mysql2
```
Usage
-----
`SingleStoreVectorStore` manages a connection pool. It is recommended to call `await store.end();` before terminating your application to ensure all connections are properly closed and to prevent resource leaks.
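For example, a minimal sketch of that pattern, reusing the `fromTexts` setup shown below and wrapping the work in `try`/`finally` so the pool is closed even if a search throws:

```typescript
import { SingleStoreVectorStore } from "@langchain/community/vectorstores/singlestore";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await SingleStoreVectorStore.fromTexts(
  ["Hello world"],
  [{ id: 1 }],
  new OpenAIEmbeddings(),
  {
    connectionOptions: {
      host: process.env.SINGLESTORE_HOST,
      port: Number(process.env.SINGLESTORE_PORT),
      user: process.env.SINGLESTORE_USERNAME,
      password: process.env.SINGLESTORE_PASSWORD,
      database: process.env.SINGLESTORE_DATABASE,
    },
  }
);

try {
  console.log(await vectorStore.similaritySearch("hello", 1));
} finally {
  // Always release the pooled connections, even if the search throws.
  await vectorStore.end();
}
```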
### Standard usage

Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Below is a straightforward example showing how to import the relevant module and perform a basic similarity search using `SingleStoreVectorStore`:
```typescript
import { SingleStoreVectorStore } from "@langchain/community/vectorstores/singlestore";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
    }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);

  await vectorStore.end();
};
```
#### API Reference:
* [SingleStoreVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_singlestore.SingleStoreVectorStore.html) from `@langchain/community/vectorstores/singlestore`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Metadata Filtering

If you need to filter results by specific metadata fields, you can pass a filter parameter to narrow your search to the documents that match all of the fields specified in the filter object:
```typescript
import { SingleStoreVectorStore } from "@langchain/community/vectorstores/singlestore";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Good afternoon", "Bye bye", "Boa tarde!", "Até logo!"],
    [
      { id: 1, language: "English" },
      { id: 2, language: "English" },
      { id: 3, language: "Portuguese" },
      { id: 4, language: "Portuguese" },
    ],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
      distanceMetric: "EUCLIDEAN_DISTANCE",
    }
  );

  const resultOne = await vectorStore.similaritySearch("greetings", 1, {
    language: "Portuguese",
  });
  console.log(resultOne);

  await vectorStore.end();
};
```
#### API Reference:
* [SingleStoreVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_singlestore.SingleStoreVectorStore.html) from `@langchain/community/vectorstores/singlestore`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *
MyScale
=======
Compatibility: Only available on Node.js.
[MyScale](https://myscale.com/) is an emerging AI database that harmonizes the power of vector search and SQL analytics, providing a managed, efficient, and responsive experience.
Setup
-----
1. Launch a cluster through [MyScale's Web Console](https://console.myscale.com/). See [MyScale's official documentation](https://docs.myscale.com/en/quickstart/) for more information.
2. After launching a cluster, view your `Connection Details` from your cluster's `Actions` menu. You will need the host, port, username, and password.
3. Install the required Node.js peer dependency in your workspace.
Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install -S @langchain/openai @clickhouse/client @langchain/community
# or
yarn add @langchain/openai @clickhouse/client @langchain/community
# or
pnpm add @langchain/openai @clickhouse/client @langchain/community
```
Index and Query Docs
--------------------
```typescript
import { MyScaleStore } from "@langchain/community/vectorstores/myscale";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MyScaleStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  {
    host: process.env.MYSCALE_HOST || "localhost",
    port: process.env.MYSCALE_PORT || "8443",
    username: process.env.MYSCALE_USERNAME || "username",
    password: process.env.MYSCALE_PASSWORD || "password",
    database: "default", // defaults to "default"
    table: "your_table", // defaults to "vector_table"
  }
);

const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [MyScaleStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_myscale.MyScaleStore.html) from `@langchain/community/vectorstores/myscale`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Query Docs From an Existing Collection
--------------------------------------
import { MyScaleStore } from "@langchain/community/vectorstores/myscale";import { OpenAIEmbeddings } from "@langchain/openai";const vectorStore = await MyScaleStore.fromExistingIndex( new OpenAIEmbeddings(), { host: process.env.MYSCALE_HOST || "localhost", port: process.env.MYSCALE_PORT || "8443", username: process.env.MYSCALE_USERNAME || "username", password: process.env.MYSCALE_PASSWORD || "password", database: "default", // defaults to "default" table: "your_table", // defaults to "vector_table" });const results = await vectorStore.similaritySearch("hello world", 1);console.log(results);const filteredResults = await vectorStore.similaritySearch("hello world", 1, { whereStr: "metadata.name = '1'",});console.log(filteredResults);
#### API Reference:
* [MyScaleStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_myscale.MyScaleStore.html) from `@langchain/community/vectorstores/myscale`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
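Since `MyScaleStore` implements LangChain's shared `VectorStore` interface, you can also wrap an existing index as a retriever. A minimal sketch, not from the original page, reusing the connection options above; `asRetriever` and `invoke` come from the generic base classes rather than anything MyScale-specific:

```typescript
import { MyScaleStore } from "@langchain/community/vectorstores/myscale";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MyScaleStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    host: process.env.MYSCALE_HOST || "localhost",
    port: process.env.MYSCALE_PORT || "8443",
    username: process.env.MYSCALE_USERNAME || "username",
    password: process.env.MYSCALE_PASSWORD || "password",
  }
);

// asRetriever comes from the base VectorStore class; the argument is k,
// the number of documents to return per query.
const retriever = vectorStore.asRetriever(2);
const docs = await retriever.invoke("hello world");
console.log(docs.map((doc) => doc.pageContent));
```

Retrievers are `Runnable`s, so this is the usual way to plug MyScale into a RAG chain.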
https://js.langchain.com/v0.2/docs/integrations/vectorstores/analyticdb
AnalyticDB
==========
[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
`AnalyticDB for PostgreSQL` is developed based on the open source `Greenplum Database` project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.
This guide shows how to use functionality related to the `AnalyticDB` vector database.
To run this example, you should have an [AnalyticDB](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) instance up and running, for example one created through [AnalyticDB Cloud Vector Database](https://www.alibabacloud.com/product/hybriddb-postgresql).
Compatibility
Only available on Node.js.
Setup
-----
LangChain.js uses [node-postgres](https://node-postgres.com/) as the connection pool for the AnalyticDB vector store.
```bash
npm install -S pg
# or
yarn add pg
# or
pnpm add pg
```

You will also need [pg-copy-streams](https://github.com/brianc/node-pg-copy-streams) to insert vectors in batches quickly.

```bash
npm install -S pg-copy-streams
# or
yarn add pg-copy-streams
# or
pnpm add pg-copy-streams
```
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----
Security
User-generated data such as usernames should not be used as input for the collection name.
**This may lead to SQL Injection!**
```typescript
import { AnalyticDBVectorStore } from "@langchain/community/vectorstores/analyticdb";
import { OpenAIEmbeddings } from "@langchain/openai";

const connectionOptions = {
  host: process.env.ANALYTICDB_HOST || "localhost",
  port: Number(process.env.ANALYTICDB_PORT) || 5432,
  database: process.env.ANALYTICDB_DATABASE || "your_database",
  user: process.env.ANALYTICDB_USERNAME || "username",
  password: process.env.ANALYTICDB_PASSWORD || "password",
};

const vectorStore = await AnalyticDBVectorStore.fromTexts(
  ["foo", "bar", "baz"],
  [{ page: 1 }, { page: 2 }, { page: 3 }],
  new OpenAIEmbeddings(),
  { connectionOptions }
);

const result = await vectorStore.similaritySearch("foo", 1);
console.log(JSON.stringify(result));
// [{"pageContent":"foo","metadata":{"page":1}}]

await vectorStore.addDocuments([{ pageContent: "foo", metadata: { page: 4 } }]);

const filterResult = await vectorStore.similaritySearch("foo", 1, {
  page: 4,
});
console.log(JSON.stringify(filterResult));
// [{"pageContent":"foo","metadata":{"page":4}}]

const filterWithScoreResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 3 }
);
console.log(JSON.stringify(filterWithScoreResult));
// [[{"pageContent":"baz","metadata":{"page":3}},0.26075905561447144]]

const filterNoMatchResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 5 }
);
console.log(JSON.stringify(filterNoMatchResult));
// []

// need to manually close the Connection pool
await vectorStore.end();
```
#### API Reference:
* [AnalyticDBVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_analyticdb.AnalyticDBVectorStore.html) from `@langchain/community/vectorstores/analyticdb`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
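Not part of the original page, but worth noting: like most LangChain vector stores, `AnalyticDBVectorStore` also exposes a `fromDocuments` factory, so you can index `Document` objects directly instead of raw strings. A minimal sketch under that assumption:

```typescript
import { AnalyticDBVectorStore } from "@langchain/community/vectorstores/analyticdb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({ pageContent: "foo", metadata: { page: 1 } }),
  new Document({ pageContent: "bar", metadata: { page: 2 } }),
];

const store = await AnalyticDBVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    connectionOptions: {
      host: process.env.ANALYTICDB_HOST || "localhost",
      port: Number(process.env.ANALYTICDB_PORT) || 5432,
      database: process.env.ANALYTICDB_DATABASE || "your_database",
      user: process.env.ANALYTICDB_USERNAME || "username",
      password: process.env.ANALYTICDB_PASSWORD || "password",
    },
  }
);

console.log(await store.similaritySearch("foo", 1));
await store.end(); // close the connection pool, as above
```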
https://js.langchain.com/v0.2/docs/integrations/vectorstores/clickhouse
ClickHouse
==========
Compatibility
Only available on Node.js.
[ClickHouse](https://clickhouse.com/) is a robust, open-source columnar database built for analytical queries and efficient storage. It is designed to provide a powerful combination of vector search and analytics.
Setup
-----
1. Launch a ClickHouse cluster. Refer to the [ClickHouse Installation Guide](https://clickhouse.com/docs/en/getting-started/install/) for details.
2. After launching a ClickHouse cluster, retrieve the `Connection Details` from the cluster's `Actions` menu. You will need the host, port, username, and password.
3. Install the required Node.js peer dependencies for ClickHouse in your workspace.
You will need to install the following peer dependencies:
```bash
npm install -S @clickhouse/client mysql2
# or
yarn add @clickhouse/client mysql2
# or
pnpm add @clickhouse/client mysql2
```
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Index and Query Docs
--------------------

```typescript
import { ClickHouseStore } from "@langchain/community/vectorstores/clickhouse";
import { OpenAIEmbeddings } from "@langchain/openai";

// Initialize ClickHouse store from texts
const vectorStore = await ClickHouseStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  {
    host: process.env.CLICKHOUSE_HOST || "localhost",
    port: process.env.CLICKHOUSE_PORT || 8443,
    username: process.env.CLICKHOUSE_USER || "username",
    password: process.env.CLICKHOUSE_PASSWORD || "password",
    database: process.env.CLICKHOUSE_DATABASE || "default",
    table: process.env.CLICKHOUSE_TABLE || "vector_table",
  }
);

// Sleep 1 second to ensure that the search occurs after the successful insertion of data.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

// Perform similarity search without filtering
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

// Perform similarity search with filtering
const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [ClickHouseStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_clickhouse.ClickHouseStore.html) from `@langchain/community/vectorstores/clickhouse`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Query Docs From an Existing Collection
--------------------------------------

```typescript
import { ClickHouseStore } from "@langchain/community/vectorstores/clickhouse";
import { OpenAIEmbeddings } from "@langchain/openai";

// Initialize ClickHouse store
const vectorStore = await ClickHouseStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    host: process.env.CLICKHOUSE_HOST || "localhost",
    port: process.env.CLICKHOUSE_PORT || 8443,
    username: process.env.CLICKHOUSE_USER || "username",
    password: process.env.CLICKHOUSE_PASSWORD || "password",
    database: process.env.CLICKHOUSE_DATABASE || "default",
    table: process.env.CLICKHOUSE_TABLE || "vector_table",
  }
);

// Sleep 1 second to ensure that the search occurs after the successful insertion of data.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

// Perform similarity search without filtering
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

// Perform similarity search with filtering
const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [ClickHouseStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_clickhouse.ClickHouseStore.html) from `@langchain/community/vectorstores/clickhouse`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
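The store also inherits the generic `VectorStore` methods, so you can append documents to an existing table and search with scores. A minimal sketch under that assumption (not from the original page):

```typescript
import { ClickHouseStore } from "@langchain/community/vectorstores/clickhouse";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await ClickHouseStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    host: process.env.CLICKHOUSE_HOST || "localhost",
    port: process.env.CLICKHOUSE_PORT || 8443,
    username: process.env.CLICKHOUSE_USER || "username",
    password: process.env.CLICKHOUSE_PASSWORD || "password",
  }
);

// addDocuments embeds the new rows and inserts them into the same table.
await vectorStore.addDocuments([
  { pageContent: "goodbye world", metadata: { id: 4, name: "4" } },
]);

// similaritySearchWithScore returns [Document, score] pairs.
const scored = await vectorStore.similaritySearchWithScore("goodbye", 1);
console.log(scored);
```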
https://js.langchain.com/v0.2/docs/integrations/platforms/openai
OpenAI
======
All functionality related to OpenAI.
> [OpenAI](https://en.wikipedia.org/wiki/OpenAI) is an American artificial intelligence (AI) research laboratory consisting of the non-profit `OpenAI Incorporated` and its for-profit subsidiary corporation `OpenAI Limited Partnership`. `OpenAI` conducts AI research with the declared intention of promoting and developing friendly AI. `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`.
> The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points.
>
> [ChatGPT](https://chat.openai.com) is the Artificial Intelligence (AI) chatbot developed by `OpenAI`.
Installation and Setup
----------------------
* Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`).
LLM
---
See a [usage example](/v0.2/docs/integrations/llms/openai).
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

```typescript
import { OpenAI } from "@langchain/openai";
```
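For a quick sanity check, here is a minimal invocation sketch; it assumes `OPENAI_API_KEY` is set in your environment, and the model name is an illustrative choice rather than one prescribed by this page:

```typescript
import { OpenAI } from "@langchain/openai";

// Assumes process.env.OPENAI_API_KEY is set; the model name is an example.
const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct", temperature: 0 });

const completion = await llm.invoke("Say hello in five words or fewer.");
console.log(completion); // a plain string completion
```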
Chat model
----------
See a [usage example](/v0.2/docs/integrations/chat/openai).

```typescript
import { ChatOpenAI } from "@langchain/openai";
```
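A minimal sketch under the same assumptions (`OPENAI_API_KEY` set; model name illustrative); unlike the LLM class, chat models return an `AIMessage` rather than a raw string:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const aiMessage = await chatModel.invoke("What is LangChain in one sentence?");
console.log(aiMessage.content);
```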
Text Embedding Model
--------------------
See a [usage example](/v0.2/docs/integrations/text_embedding/openai).

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
```
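A minimal sketch of embedding a query (same `OPENAI_API_KEY` assumption); `embedQuery` returns a single vector, while `embedDocuments` embeds a batch of texts:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

const vector = await embeddings.embedQuery("Hello world");
console.log(vector.length); // the embedding dimensionality, e.g. 1536
```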
Chain
-----

```typescript
import { OpenAIModerationChain } from "langchain/chains";
```
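The moderation chain wraps OpenAI's moderation endpoint. A minimal usage sketch, assuming the chain's default `input`/`output` keys and `OPENAI_API_KEY` in the environment:

```typescript
import { OpenAIModerationChain } from "langchain/chains";

// throwError: false returns a message instead of throwing on flagged content.
const moderation = new OpenAIModerationChain({ throwError: false });

const { output } = await moderation.invoke({ input: "I love programming." });
console.log(output);
```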
https://js.langchain.com/v0.2/docs/integrations/tools/
Tools
=====
* [ChatGPT Plugins](/v0.2/docs/integrations/tools/aiplugin-tool): This example shows how to use ChatGPT Plugins within LangChain abstractions.
* [Connery Action Tool](/v0.2/docs/integrations/tools/connery): Using this tool, you can integrate individual Connery Actions into your LangChain agent.
* [Dall-E Tool](/v0.2/docs/integrations/tools/dalle): The Dall-E tool allows your agent to create images using OpenAI's Dall-E image generation tool.
* [Discord Tool](/v0.2/docs/integrations/tools/discord): The Discord Tool gives your agent the ability to search, read, and write messages to Discord channels.
* [DuckDuckGoSearch](/v0.2/docs/integrations/tools/duckduckgo_search): DuckDuckGoSearch offers a privacy-focused search API designed for LLM agents. It provides seamless integration with a wide range of data sources, prioritizing user privacy and relevant search results.
* [Exa Search](/v0.2/docs/integrations/tools/exa_search): Exa (formerly Metaphor Search) is a search engine fully designed for use by LLMs. Search for documents on the internet using natural language queries, then retrieve cleaned HTML content from desired documents.
* [Gmail Tool](/v0.2/docs/integrations/tools/gmail): The Gmail Tool allows your agent to create and view messages from a linked email account.
* [Google Calendar Tool](/v0.2/docs/integrations/tools/google_calendar): The Google Calendar Tools allow your agent to create and view Google Calendar events from a linked calendar.
* [Google Places Tool](/v0.2/docs/integrations/tools/google_places): The Google Places Tool allows your agent to utilize the Google Places API in order to find addresses…
* [Agent with AWS Lambda](/v0.2/docs/integrations/tools/lambda_agent): Full docs here: https://docs.aws.amazon.com/lambda/index.html
* [Python interpreter tool](/v0.2/docs/integrations/tools/pyinterpreter): This tool executes code and can potentially perform destructive actions. Be careful that you trust any code passed to it!
* [SearchApi tool](/v0.2/docs/integrations/tools/searchapi): The SearchApi tool connects your agents and chains to the internet.
* [Searxng Search tool](/v0.2/docs/integrations/tools/searxng): The SearxngSearch tool connects your agents and chains to the internet.
* [StackExchange Tool](/v0.2/docs/integrations/tools/stackexchange): The StackExchange tool connects your agents and chains to StackExchange's API.
* [Tavily Search](/v0.2/docs/integrations/tools/tavily_search): Tavily Search is a robust search API tailored specifically for LLM agents. It seamlessly integrates with diverse data sources to ensure a superior, relevant search experience.
* [Web Browser Tool](/v0.2/docs/integrations/tools/webbrowser): The Webbrowser Tool gives your agent the ability to visit a website and extract information. It is described to the agent as…
* [Wikipedia tool](/v0.2/docs/integrations/tools/wikipedia): The WikipediaQueryRun tool connects your agents and chains to Wikipedia.
* [WolframAlpha Tool](/v0.2/docs/integrations/tools/wolframalpha): The WolframAlpha tool connects your agents and chains to WolframAlpha's state-of-the-art computational intelligence engine.
* [Agent with Zapier NLA Integration](/v0.2/docs/integrations/tools/zapier_agent): This module has been deprecated and is no longer supported. The documentation below will not work in versions 0.2.0 or later.
https://js.langchain.com/v0.2/docs/integrations/platforms/anthropic
Anthropic
=========
All functionality related to Anthropic models.
[Anthropic](https://www.anthropic.com/) is an AI safety and research company, and is the creator of Claude. This page covers all integrations between Anthropic models and LangChain.
Prompting Best Practices
------------------------
Anthropic models differ from OpenAI models in a few prompting best practices.
**System messages may only be the first message**
Anthropic models require any system message to be the first message in your prompt.
`ChatAnthropic`
---------------
`ChatAnthropic` is a subclass of LangChain's `ChatModel`, meaning it works best with `ChatPromptTemplate`. You can import this wrapper with the following code:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({});
```

When working with chat models, it is preferred that you design your prompts as `ChatPromptTemplate`s. Here is an example:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful chatbot"],
  ["human", "Tell me a joke about {topic}"],
]);
```
You can then use this in a chain as follows:
```typescript
const chain = prompt.pipe(model);

await chain.invoke({ topic: "bears" });
```
See the [chat model integration page](/v0.2/docs/integrations/chat/anthropic/) for more examples, including multimodal inputs.
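As a further illustration (not from the original page), chat models also support token-level streaming via the standard `.stream()` method; the model name here is an assumption:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Assumes process.env.ANTHROPIC_API_KEY is set; the model name is an example.
const model = new ChatAnthropic({ model: "claude-3-haiku-20240307" });

const stream = await model.stream("Why is the sky blue?");
for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}
```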
https://js.langchain.com/v0.2/docs/integrations/retrievers/kendra-retriever/
Amazon Kendra Retriever
=======================
Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.
Setup
-----
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm i @aws-sdk/client-kendra @langchain/community
yarn add @aws-sdk/client-kendra @langchain/community
pnpm add @aws-sdk/client-kendra @langchain/community
Usage
-----
import { AmazonKendraRetriever } from "@langchain/community/retrievers/amazon_kendra";

const retriever = new AmazonKendraRetriever({
  topK: 10,
  indexId: "YOUR_INDEX_ID",
  region: "us-east-2", // Your region
  clientOptions: {
    credentials: {
      accessKeyId: "YOUR_ACCESS_KEY_ID",
      secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
    },
  },
});

const docs = await retriever.invoke("How are clouds formed?");
console.log(docs);
#### API Reference:
* [AmazonKendraRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_amazon_kendra.AmazonKendraRetriever.html) from `@langchain/community/retrievers/amazon_kendra`
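If your environment already provides AWS credentials (for example through environment variables, shared config files, or an IAM role), you can likely omit the explicit keys and let the underlying Kendra client resolve them via the AWS SDK's default credential provider chain. A minimal sketch under that assumption, not an officially documented configuration:

import { AmazonKendraRetriever } from "@langchain/community/retrievers/amazon_kendra";

// No explicit credentials: the AWS SDK's default provider chain is assumed
// to supply them (env vars, shared config, or an attached IAM role).
const retriever = new AmazonKendraRetriever({
  topK: 10,
  indexId: "YOUR_INDEX_ID",
  region: "us-east-2",
  clientOptions: {},
});

const docs = await retriever.invoke("How are clouds formed?");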
https://js.langchain.com/v0.2/docs/integrations/retrievers/exa/ |
Exa Search
==========
The Exa Search API provides a new search experience designed for LLMs.
Usage
-----
First, install the LangChain integration package for Exa:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/exa
yarn add @langchain/exa
pnpm add @langchain/exa
You'll need to set your API key as an environment variable.
The `Exa` class defaults to `EXASEARCH_API_KEY` when searching for your API key.
import { ExaRetriever } from "@langchain/exa";
import Exa from "exa-js";

const retriever = new ExaRetriever({
  // @ts-expect-error Some TS Config's will cause this to give a TypeScript error, even though it works.
  client: new Exa(
    process.env.EXASEARCH_API_KEY // default API key
  ),
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer in the 2022 State of the Union?"
);
console.log(retrievedDocs);

/*
[
  Document { pageContent: undefined, metadata: { title: '2022 State of the Union Address | The White House', url: 'https://www.whitehouse.gov/state-of-the-union-2022/', publishedDate: '2022-02-25', author: null, id: 'SW3SLghgYTLQKnqBC-6ftQ', score: 0.163949653506279 } },
  Document { pageContent: undefined, metadata: { title: "Read: Justice Stephen Breyer's White House remarks after announcing his retirement | CNN Politics", url: 'https://www.cnn.com/2022/01/27/politics/transcript-stephen-breyer-retirement-remarks/index.html', publishedDate: '2022-01-27', author: 'CNN', id: 'rIeqmU1L9sd28wGrqefRPA', score: 0.1638609766960144 } },
  Document { pageContent: undefined, metadata: { title: 'Sunday, January 22, 2023 - How Appealing', url: 'https://howappealing.abovethelaw.com/2023/01/22/', publishedDate: '2023-01-22', author: null, id: 'aubLpkpZWoQSN-he-hwtRg', score: 0.15869899094104767 } },
  Document { pageContent: undefined, metadata: { title: "Noting Past Divisions Retiring Justice Breyer Says It's Up to Future Generations to Make American Experiment Work", url: 'https://www.c-span.org/video/?517531-1/noting-past-divisions-retiring-justice-breyer-future-generations-make-american-experiment-work', publishedDate: '2022-01-27', author: null, id: '8pNk76nbao23bryEMD0u5g', score: 0.15786601603031158 } },
  Document { pageContent: undefined, metadata: { title: 'Monday, January 24, 2022 - How Appealing', url: 'https://howappealing.abovethelaw.com/2022/01/24/', publishedDate: '2022-01-24', author: null, id: 'pt6xlioR4bdm8kSJUQoyPA', score: 0.1542145311832428 } },
  Document { pageContent: undefined, metadata: { title: "Full transcript of Biden's State of the Union address", url: 'https://www.axios.com/2023/02/08/sotu-2023-biden-transcript?utm_source=twitter&utm_medium=social&utm_campaign=editorial&utm_content=politics', publishedDate: '2023-02-08', author: 'Axios', id: 'Dg5JepEwPwAMjgnSA_Z_NA', score: 0.15383175015449524 } },
  Document { pageContent: undefined, metadata: { title: "Read Justice Breyer's remarks on retiring and his hope in the American 'experiment'", url: 'https://www.npr.org/2022/01/27/1076162088/read-stephen-breyer-retirement-supreme-court', publishedDate: '2022-01-27', author: 'NPR Staff', id: 'WDKA1biLMREo3BsOs95SIw', score: 0.14877735078334808 } },
  Document { pageContent: undefined, metadata: { title: 'Grading My 2021 Predictions', url: 'https://astralcodexten.substack.com/p/grading-my-2021-predictions', publishedDate: '2022-01-24', author: 'Scott Alexander', id: 'jPutj4IcqgAiKSs6-eqv3g', score: 0.14813132584095 } },
  Document { pageContent: undefined, metadata: { title: '', url: 'https://www.supremecourt.gov/oral_arguments/argument_transcripts/2021/21a240_l537.pdf', author: null, id: 'p97vY-5yvA2kBB9nl-7B3A', score: 0.14450226724147797 } },
  Document { pageContent: undefined, metadata: { title: 'Remarks by President Biden at a Political Event | Charleston, SC', url: 'https://www.whitehouse.gov/briefing-room/speeches-remarks/2024/01/08/remarks-by-president-biden-at-a-political-event-charleston-sc/', publishedDate: '2024-01-08', author: 'The White House', id: 'ZdPbaacRn8bgwDWv_aA6zg', score: 0.14446410536766052 } }
]
*/
#### API Reference:
* [ExaRetriever](https://v02.api.js.langchain.com/classes/langchain_exa.ExaRetriever.html) from `@langchain/exa`
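If you need to tune the underlying search, the retriever also accepts a `searchArgs` object that is passed through to the Exa client. A minimal sketch under that assumption (both `searchArgs` on the retriever and `numResults` in the `exa-js` SDK are assumptions here; check the API reference above for the exact fields):

import { ExaRetriever } from "@langchain/exa";
import Exa from "exa-js";

const tunedRetriever = new ExaRetriever({
  client: new Exa(process.env.EXASEARCH_API_KEY),
  // Assumed pass-through options for the underlying Exa search call.
  searchArgs: {
    numResults: 5,
  },
});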
https://js.langchain.com/v0.1/docs/modules/agents/agent_types/ |
Agent Types
===========
This categorizes all the available agents along a few dimensions.
**Intended Model Type**
Whether this agent is intended for Chat Models (takes in messages, outputs a message) or LLMs (takes in a string, outputs a string). The main thing this affects is the prompting strategy used. You can use an agent with a model type other than the one it is intended for, but it likely won't produce results of the same quality.
**Supports Chat History**
Whether or not an agent type supports chat history. If it does, it can be used as a chatbot; if not, it is better suited to single tasks. Supporting chat history generally requires more capable models, so earlier agent types aimed at weaker models may not support it.
**Supports Multi-Input Tools**
Whether or not an agent type supports tools with multiple inputs. If a tool only requires a single input, it is generally easier for an LLM to know how to invoke it. Therefore, several earlier agent types aimed at weaker models may not support them.
**Supports Parallel Function Calling**
Having an LLM call multiple tools at the same time can greatly speed up agents when there are tasks that benefit from it. However, this is much more challenging for LLMs, so some agent types do not support it.
**Required Model Params**
Whether this agent requires the model to support any additional parameters. Some agent types take advantage of things like OpenAI function calling, which require other model parameters. If none are required, then that means that everything is done via prompting.
**When to Use**
Our commentary on when you should consider using this agent type.
| Agent Type | Intended Model Type | Supports Chat History | Supports Multi-Input Tools | Supports Parallel Function Calling | Required Model Params | When to Use |
| --- | --- | --- | --- | --- | --- | --- |
| [OpenAI Tools](/v0.1/docs/modules/agents/agent_types/openai_tools_agent/) | Chat | ✅ | ✅ | ✅ | `tools` | If you are using a recent OpenAI model (`1106` onwards) |
| [OpenAI Functions](/v0.1/docs/modules/agents/agent_types/openai_functions_agent/) | Chat | ✅ | ✅ | | `functions` | If you are using an OpenAI model, or an open-source model that has been finetuned for function calling and exposes the same `functions` parameters as OpenAI |
| [XML](/v0.1/docs/modules/agents/agent_types/xml/) | LLM | ✅ | | | | If you are using Anthropic models, or other models good at XML |
| [Structured Chat](/v0.1/docs/modules/agents/agent_types/structured_chat/) | Chat | ✅ | ✅ | | | If you need to support tools with multiple inputs and are using a model that does not support function calling |
| [ReAct](/v0.1/docs/modules/agents/agent_types/react/) | LLM | ✅ | | | | If you are using a simpler model |
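To make the table concrete, here is a minimal sketch of constructing and running one of these agent types (the OpenAI tools agent) in LangChain.js, following the pattern from the agents quick start; the Tavily search tool and the `hwchase17/openai-tools-agent` Hub prompt are example choices, not requirements:

import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// A chat model that supports the `tools` model param (see the table above).
const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106", temperature: 0 });
const tools = [new TavilySearchResults({ maxResults: 1 })];

// A prompt with an `agent_scratchpad` placeholder, pulled from the LangChain Hub.
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");

const agent = await createOpenAIToolsAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

const result = await agentExecutor.invoke({
  input: "What is LangChain?",
});
console.log(result.output);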
https://js.langchain.com/v0.2/docs/integrations/toolkits/ |
Toolkits
========
* [Connery Toolkit](/v0.2/docs/integrations/toolkits/connery): Using this toolkit, you can integrate Connery Actions into your LangChain agent.
* [JSON Agent Toolkit](/v0.2/docs/integrations/toolkits/json): This example shows how to load and use an agent with a JSON toolkit.
* [OpenAPI Agent Toolkit](/v0.2/docs/integrations/toolkits/openapi): This example shows how to load and use an agent with an OpenAPI toolkit.
* [AWS Step Functions Toolkit](/v0.2/docs/integrations/toolkits/sfn_agent): AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
* [SQL Agent Toolkit](/v0.2/docs/integrations/toolkits/sql): This example shows how to load and use an agent with a SQL toolkit.
* [VectorStore Agent Toolkit](/v0.2/docs/integrations/toolkits/vectorstore): This example shows how to load and use an agent with a vectorstore toolkit.
https://js.langchain.com/docs/get_started/installation#installing-integration-packages |
Installation
============
Supported Environments
----------------------
LangChain is written in TypeScript and can be used in:
* Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
* Cloudflare Workers
* Vercel / Next.js (Browser, Serverless and Edge functions)
* Supabase Edge Functions
* Browser
* Deno
* Bun
However, note that individual integrations may not be supported in all environments.
Installation
------------
To get started, install LangChain with the following command:
* npm
* Yarn
* pnpm
npm install -S langchain
yarn add langchain
pnpm add langchain
### TypeScript
LangChain is written in TypeScript and provides type definitions for all of its public APIs.
Installing integration packages
-------------------------------
LangChain supports packages that contain specific module integrations with third-party providers. They can be as specific as [`@langchain/google-genai`](/v0.1/docs/integrations/platforms/google/#chatgooglegenerativeai), which contains integrations just for Google AI Studio models, or as broad as [`@langchain/community`](https://www.npmjs.com/package/@langchain/community), which contains a broader variety of community-contributed integrations.
These packages, as well as the main LangChain package, all depend on [`@langchain/core`](https://www.npmjs.com/package/@langchain/core), which contains the base abstractions that these integration packages extend.
To ensure that all integrations and their types interact with each other properly, it is important that they all use the same version of `@langchain/core`. The best way to guarantee this is to add a `"resolutions"` or `"overrides"` field like the following in your project's `package.json`. The name will depend on your package manager:
tip
The `resolutions` or `pnpm.overrides` fields for `yarn` or `pnpm` must be set in the root `package.json` file.
If you are using `yarn`:
yarn package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "resolutions": { "@langchain/core": "0.1.5" }}
Or for `npm`:
npm package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "overrides": { "@langchain/core": "0.1.5" }}
Or for `pnpm`:
pnpm package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "pnpm": { "overrides": { "@langchain/core": "0.1.5" } }}
### @langchain/community
The [@langchain/community](https://www.npmjs.com/package/@langchain/community) package contains third-party integrations. It is automatically installed along with `langchain`, but can also be used separately with just `@langchain/core`. Install with:
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
### @langchain/core
The [@langchain/core](https://www.npmjs.com/package/@langchain/core) package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed along with `langchain`, but can also be used separately. Install with:
* npm
* Yarn
* pnpm
npm install @langchain/core
yarn add @langchain/core
pnpm add @langchain/core
Loading the library
-------------------
### ESM
LangChain provides an ESM build targeting Node.js environments. You can import it using the following syntax:
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAI } from "@langchain/openai";
If you are using TypeScript in an ESM project, we suggest updating your `tsconfig.json` to include the following:
tsconfig.json
{ "compilerOptions": { ... "target": "ES2020", // or higher "module": "nodenext", }}
### CommonJS
LangChain provides a CommonJS build targeting Node.js environments. You can import it using the following syntax:
const { OpenAI } = require("@langchain/openai");
### Cloudflare Workers
LangChain can be used in Cloudflare Workers. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
### Vercel / Next.js
LangChain can be used in Vercel / Next.js. We support using LangChain in frontend components, in Serverless functions and in Edge functions. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
### Deno / Supabase Edge Functions
LangChain can be used in Deno / Supabase Edge Functions. You can import it using the following syntax:
import { OpenAI } from "https://esm.sh/@langchain/openai";
or
import { OpenAI } from "npm:@langchain/openai";
We recommend looking at our [Supabase Template](https://github.com/langchain-ai/langchain-template-supabase) for an example of how to use LangChain in Supabase Edge Functions.
### Browser
LangChain can be used in the browser. In our CI we test bundling LangChain with Webpack and Vite, but other bundlers should work too. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
Unsupported: Node.js 16
-----------------------
We do not support Node.js 16, but if you still want to run LangChain on Node.js 16, you will need to follow the instructions in this section. We do not guarantee that these instructions will continue to work in the future.
You will have to make `fetch` available globally, either:
* run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
* install `node-fetch` and follow the instructions [here](https://github.com/node-fetch/node-fetch#providing-global-access)
You'll also need to [polyfill `ReadableStream`](https://www.npmjs.com/package/web-streams-polyfill) by installing:
* npm
* Yarn
* pnpm
npm i web-streams-polyfill
yarn add web-streams-polyfill
pnpm add web-streams-polyfill
And then adding it to the global namespace in your main entrypoint:
import "web-streams-polyfill/es6";
Additionally, you'll have to polyfill `structuredClone`, e.g. by installing `core-js` and following the instructions [here](https://github.com/zloirock/core-js).
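Putting those pieces together, a main entrypoint for Node.js 16 might look like the following sketch (the `core-js/actual/structured-clone` import path is an assumption; check the `core-js` docs for the variant that fits your setup):

// Polyfills needed before any LangChain imports on Node.js 16.
import "web-streams-polyfill/es6"; // provides ReadableStream
import "core-js/actual/structured-clone"; // provides structuredClone (assumed path)

import { OpenAI } from "@langchain/openai";

// Remember that `fetch` must also be available globally, e.g. by running with
// NODE_OPTIONS='--experimental-fetch' or wiring up node-fetch as described above.
const model = new OpenAI({ temperature: 0 });
console.log(await model.invoke("Say hello!"));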
If you are running Node.js 18+, you do not need to do anything.
https://js.langchain.com/v0.2/docs/integrations/retrievers/tavily |
Tavily Search API
=================
[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
You will need to populate a `TAVILY_API_KEY` environment variable with your Tavily API key or pass it into the constructor as `apiKey`.
For a full list of allowed arguments, see [the official documentation](https://app.tavily.com/documentation/api). You can also pass any param to the SDK via a `kwargs` object.
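For instance, a configuration that passes the key explicitly and forwards an extra Tavily parameter through `kwargs` might look like the following sketch (`exclude_domains` is an assumed Tavily API parameter, shown purely for illustration):

import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";

const customRetriever = new TavilySearchAPIRetriever({
  apiKey: process.env.TAVILY_API_KEY, // optional if TAVILY_API_KEY is set
  k: 3,
  kwargs: {
    // Forwarded verbatim to the Tavily API (assumed parameter name).
    exclude_domains: ["wikipedia.org"],
  },
});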
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";

const retriever = new TavilySearchAPIRetriever({
  k: 3,
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer in the 2022 State of the Union?"
);
console.log({ retrievedDocs });

/*
{
  retrievedDocs: [
    Document { pageContent: `Shy Justice Breyer. During his remarks, the president paid tribute to retiring Supreme Court Justice Stephen Breyer. "Tonight, I'd like to honor someone who dedicated his life to...`, metadata: [Object] },
    Document { pageContent: 'Fact Check. Ukraine. 56 Posts. Sort by. 10:16 p.m. ET, March 1, 2022. Biden recognized outgoing Supreme Court Justice Breyer during his speech. President Biden recognized outgoing...', metadata: [Object] },
    Document { pageContent: `In his State of the Union address on March 1, Biden thanked Breyer for his service. "I'd like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army...`, metadata: [Object] }
  ]
}
*/
#### API Reference:
* [TavilySearchAPIRetriever](https://v02.api.js.langchain.com/classes/langchain_community_retrievers_tavily_search_api.TavilySearchAPIRetriever.html) from `@langchain/community/retrievers/tavily_search_api`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/pgvector |
PGVector
========
To enable vector search in a generic PostgreSQL database, LangChain.js supports using the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
Setup
-----
To work with PGVector, you need to install the `pg` package:
* npm
* Yarn
* pnpm
npm install pg
yarn add pg
pnpm add pg
### Set up a `pgvector` self-hosted instance with `docker-compose`
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
`pgvector` provides a prebuilt Docker image that can be used to quickly set up a self-hosted Postgres instance. Create a file named `docker-compose.yml` with the following contents:
# Run this command to start the database:
# docker-compose up --build
version: "3"
services:
  db:
    hostname: 127.0.0.1
    image: ankane/pgvector
    ports:
      - 5432:5432
    restart: always
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=ChangeMe
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
And then in the same directory, run `docker compose up` to start the container.
You can find more information on how to setup `pgvector` in the [official repository](https://github.com/pgvector/pgvector).
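Note that the compose file above mounts a local `init.sql` into the container's initialization directory. At a minimum, that file would typically run `CREATE EXTENSION IF NOT EXISTS vector;` so the extension is enabled when the database is first created (an assumption about your setup; adjust the script to your needs).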
Usage
-----
Security
User-generated data such as usernames should not be used as input for table and column names.
**This may lead to SQL Injection!**
One complete example of using `PGVectorStore` is the following:
import { OpenAIEmbeddings } from "@langchain/openai";
import {
  DistanceStrategy,
  PGVectorStore,
} from "@langchain/community/vectorstores/pgvector";
import { PoolConfig } from "pg";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvector

const config = {
  postgresConnectionOptions: {
    type: "postgres",
    host: "127.0.0.1",
    port: 5433,
    user: "myuser",
    password: "ChangeMe",
    database: "api",
  } as PoolConfig,
  tableName: "testlangchain",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
  // supported distance strategies: cosine (default), innerProduct, or euclidean
  distanceStrategy: "cosine" as DistanceStrategy,
};

const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);

await pgvectorStore.addDocuments([
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
]);

const results = await pgvectorStore.similaritySearch("water", 1);
console.log(results);
/*
  [ Document { pageContent: "Cat drinks milk", metadata: { a: 1 } } ]
*/

// Filtering is supported
const results2 = await pgvectorStore.similaritySearch("water", 1, {
  a: 2,
});
console.log(results2);
/*
  [ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

// Filtering on multiple values using "in" is supported too
const results3 = await pgvectorStore.similaritySearch("water", 1, {
  a: {
    in: [2],
  },
});
console.log(results3);
/*
  [ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

await pgvectorStore.delete({
  filter: {
    a: 1,
  },
});

const results4 = await pgvectorStore.similaritySearch("water", 1);
console.log(results4);
/*
  [ Document { pageContent: "what's this", metadata: { a: 2 } } ]
*/

await pgvectorStore.end();
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [DistanceStrategy](https://v02.api.js.langchain.com/types/langchain_community_vectorstores_pgvector.DistanceStrategy.html) from `@langchain/community/vectorstores/pgvector`
* [PGVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector`
You can also specify a `collectionTableName` and a `collectionName` to partition vectors between multiple users or namespaces.
### Advanced: reusing connections
You can reuse connections by creating a pool, then creating new `PGVectorStore` instances directly via the constructor.
Note that you should call `.initialize()` at least once to set up your tables properly before using the constructor.
import { OpenAIEmbeddings } from "@langchain/openai";
import { PGVectorStore } from "@langchain/community/vectorstores/pgvector";
import pg from "pg";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/pgvector

const reusablePool = new pg.Pool({
  host: "127.0.0.1",
  port: 5433,
  user: "myuser",
  password: "ChangeMe",
  database: "api",
});

const originalConfig = {
  pool: reusablePool,
  tableName: "testlangchain",
  collectionName: "sample",
  collectionTableName: "collections",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
};

// Set up the DB.
// Can skip this step if you've already initialized the DB.
// await PGVectorStore.initialize(new OpenAIEmbeddings(), originalConfig);
const pgvectorStore = new PGVectorStore(new OpenAIEmbeddings(), originalConfig);

await pgvectorStore.addDocuments([
  { pageContent: "what's this", metadata: { a: 2 } },
  { pageContent: "Cat drinks milk", metadata: { a: 1 } },
]);

const results = await pgvectorStore.similaritySearch("water", 1);
console.log(results);
/*
  [ Document { pageContent: "Cat drinks milk", metadata: { a: 1 } } ]
*/

const pgvectorStore2 = new PGVectorStore(new OpenAIEmbeddings(), {
  pool: reusablePool,
  tableName: "testlangchain",
  collectionTableName: "collections",
  collectionName: "some_other_collection",
  columns: {
    idColumnName: "id",
    vectorColumnName: "vector",
    contentColumnName: "content",
    metadataColumnName: "metadata",
  },
});

const results2 = await pgvectorStore2.similaritySearch("water", 1);
console.log(results2);
/*
  []
*/

await reusablePool.end();
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PGVectorStore](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_pgvector.PGVectorStore.html) from `@langchain/community/vectorstores/pgvector`
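Like other LangChain.js vector stores, a `PGVectorStore` can also be wrapped as a retriever for use in chains. Continuing from the first example above:

// Wrap the store as a retriever that returns the top 2 matches.
const retriever = pgvectorStore.asRetriever({ k: 2 });
const retrievedDocs = await retriever.invoke("milk");
console.log(retrievedDocs);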
https://js.langchain.com/v0.2/docs/integrations/vectorstores/cloudflare_vectorize |
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[🦜🔗](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
Cloudflare Vectorize
====================
If you're deploying your project in a Cloudflare worker, you can use [Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/) with LangChain.js. It's a powerful and convenient option that's built directly into Cloudflare.
Setup
-----
Compatibility
Cloudflare Vectorize is currently in open beta, and requires a Cloudflare account on a paid plan to use.
After [setting up your project](https://developers.cloudflare.com/vectorize/get-started/intro/#prerequisites), create an index by running the following Wrangler command:
$ npx wrangler vectorize create <index_name> --preset @cf/baai/bge-small-en-v1.5
You can see a full list of options for the `vectorize` command [in the official documentation](https://developers.cloudflare.com/workers/wrangler/commands/#vectorize).
You'll then need to update your `wrangler.toml` file to include an entry for `[[vectorize]]`:
```toml
[[vectorize]]
binding = "VECTORIZE_INDEX"
index_name = "<index_name>"
```
Finally, you'll need to install the LangChain Cloudflare integration package:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/cloudflare
yarn add @langchain/cloudflare
pnpm add @langchain/cloudflare
Usage
-----
Below is an example worker that adds documents to a vectorstore, queries it, or clears it depending on the path used. It also uses [Cloudflare Workers AI Embeddings](/v0.2/docs/integrations/text_embedding/cloudflare_ai).
note
If running locally, be sure to run wrangler as `npx wrangler dev --remote`!
name = "langchain-test"main = "worker.ts"compatibility_date = "2024-01-10"[[vectorize]]binding = "VECTORIZE_INDEX"index_name = "langchain-test"[ai]binding = "AI"
```typescript
// @ts-nocheck
import type {
  VectorizeIndex,
  Fetcher,
  Request,
} from "@cloudflare/workers-types";
import {
  CloudflareVectorizeStore,
  CloudflareWorkersAIEmbeddings,
} from "@langchain/cloudflare";

export interface Env {
  VECTORIZE_INDEX: VectorizeIndex;
  AI: Fetcher;
}

export default {
  async fetch(request: Request, env: Env) {
    const { pathname } = new URL(request.url);
    const embeddings = new CloudflareWorkersAIEmbeddings({
      binding: env.AI,
      model: "@cf/baai/bge-small-en-v1.5",
    });
    const store = new CloudflareVectorizeStore(embeddings, {
      index: env.VECTORIZE_INDEX,
    });
    if (pathname === "/") {
      const results = await store.similaritySearch("hello", 5);
      return Response.json(results);
    } else if (pathname === "/load") {
      // Upsertion by id is supported
      await store.addDocuments(
        [
          { pageContent: "hello", metadata: {} },
          { pageContent: "world", metadata: {} },
          { pageContent: "hi", metadata: {} },
        ],
        { ids: ["id1", "id2", "id3"] }
      );
      return Response.json({ success: true });
    } else if (pathname === "/clear") {
      await store.delete({ ids: ["id1", "id2", "id3"] });
      return Response.json({ success: true });
    }
    return Response.json({ error: "Not Found" }, { status: 404 });
  },
};
```
#### API Reference:
* [CloudflareVectorizeStore](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareVectorizeStore.html) from `@langchain/cloudflare`
* [CloudflareWorkersAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cloudflare.CloudflareWorkersAIEmbeddings.html) from `@langchain/cloudflare`
You can also pass a `filter` parameter to filter by previously loaded metadata. See [the official documentation](https://developers.cloudflare.com/vectorize/learning/metadata-filtering/) for information on the required format.
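As a hedged sketch, inside the worker's `fetch` handler above, a filtered search might look like this. The `topic` field is hypothetical and would have to be present on documents when they were added; Vectorize may also require a metadata index on any field you filter by, per the linked docs:

```typescript
// A hedged sketch: "topic" is a hypothetical metadata field assumed to
// have been set when documents were added. See the official Vectorize
// docs above for the required filter format and metadata index setup.
const filtered = await store.similaritySearch("hello", 5, {
  topic: "greeting",
});
return Response.json(filtered);
```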
https://js.langchain.com/v0.2/docs/integrations/vectorstores/elasticsearch
Elasticsearch
=============
Compatibility
Only available on Node.js.
[Elasticsearch](https://github.com/elastic/elasticsearch) is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads. It also supports vector search using the [k-nearest neighbor](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) (kNN) algorithm, as well as [custom models for Natural Language Processing](https://www.elastic.co/blog/how-to-deploy-nlp-text-embeddings-and-vector-search) (NLP). You can read more about vector search support in Elasticsearch [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html).
LangChain.js uses [@elastic/elasticsearch](https://github.com/elastic/elasticsearch-js) as the client for the Elasticsearch vector store.
Setup
-----
* npm
* Yarn
* pnpm
npm install -S @elastic/elasticsearch
yarn add @elastic/elasticsearch
pnpm add @elastic/elasticsearch
You'll also need to have an Elasticsearch instance running. You can use the [official Docker image](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) to get started, or you can use [Elastic Cloud](https://www.elastic.co/cloud/), Elastic's official cloud service.
To connect to Elastic Cloud, see the documentation [here](https://www.elastic.co/guide/en/kibana/current/api-keys.html) on obtaining an API key.
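As a minimal sketch, wiring the vector store to an Elastic Cloud deployment might look like the following. The cloud ID and API key are placeholders you generate in the Elastic Cloud console; `cloud` and `auth` are standard `@elastic/elasticsearch` client options:

```typescript
import { Client } from "@elastic/elasticsearch";
import { ElasticVectorSearch } from "@langchain/community/vectorstores/elasticsearch";
import { OpenAIEmbeddings } from "@langchain/openai";

// Placeholders: generate both values in the Elastic Cloud console / Kibana,
// per the docs linked above.
const client = new Client({
  cloud: { id: "YOUR_CLOUD_ID" },
  auth: { apiKey: "YOUR_API_KEY" },
});

const vectorStore = new ElasticVectorSearch(new OpenAIEmbeddings(), {
  client,
  indexName: "test_vectorstore",
});
```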
Example: index docs, vector search and LLM integration
-------------------------------------------------------
Below is an example that indexes 4 documents in Elasticsearch, runs a vector search query, and finally uses an LLM to answer a question in natural language based on the retrieved documents.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```typescript
import { Client, ClientOptions } from "@elastic/elasticsearch";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { VectorDBQAChain } from "langchain/chains";
import {
  ElasticClientArgs,
  ElasticVectorSearch,
} from "@langchain/community/vectorstores/elasticsearch";
import { Document } from "@langchain/core/documents";

// To run this, first start Elastic's Docker container with
// `docker-compose up -d --build`
export async function run() {
  const config: ClientOptions = {
    node: process.env.ELASTIC_URL ?? "http://127.0.0.1:9200",
  };
  if (process.env.ELASTIC_API_KEY) {
    config.auth = {
      apiKey: process.env.ELASTIC_API_KEY,
    };
  } else if (process.env.ELASTIC_USERNAME && process.env.ELASTIC_PASSWORD) {
    config.auth = {
      username: process.env.ELASTIC_USERNAME,
      password: process.env.ELASTIC_PASSWORD,
    };
  }
  const clientArgs: ElasticClientArgs = {
    client: new Client(config),
    indexName: process.env.ELASTIC_INDEX ?? "test_vectorstore",
  };

  // Index documents
  const docs = [
    new Document({
      metadata: { foo: "bar" },
      pageContent: "Elasticsearch is a powerful vector db",
    }),
    new Document({
      metadata: { foo: "bar" },
      pageContent: "the quick brown fox jumped over the lazy dog",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent: "lorem ipsum dolor sit amet",
    }),
    new Document({
      metadata: { baz: "qux" },
      pageContent:
        "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
    }),
  ];

  const embeddings = new OpenAIEmbeddings();
  // await ElasticVectorSearch.fromDocuments(docs, embeddings, clientArgs);
  const vectorStore = new ElasticVectorSearch(embeddings, clientArgs);

  // Also supports an additional {ids: []} parameter for upsertion
  const ids = await vectorStore.addDocuments(docs);

  /* Search the vector DB independently with meta filters */
  const results = await vectorStore.similaritySearch("fox jump", 1);
  console.log(JSON.stringify(results, null, 2));
  /*
    [
      {
        "pageContent": "the quick brown fox jumped over the lazy dog",
        "metadata": { "foo": "bar" }
      }
    ]
  */

  /* Use as part of a chain (currently no metadata filters) for LLM query */
  const model = new OpenAI();
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 1,
    returnSourceDocuments: true,
  });
  const response = await chain.invoke({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response, null, 2));
  /*
    {
      "text": " Elasticsearch is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
      "sourceDocuments": [
        {
          "pageContent": "Elasticsearch a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads.",
          "metadata": { "baz": "qux" }
        }
      ]
    }
  */

  await vectorStore.delete({ ids });

  const response2 = await chain.invoke({ query: "What is Elasticsearch?" });
  console.log(JSON.stringify(response2, null, 2));
  /*
    []
  */
}
```
#### API Reference:
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [VectorDBQAChain](https://v02.api.js.langchain.com/classes/langchain_chains.VectorDBQAChain.html) from `langchain/chains`
* [ElasticClientArgs](https://v02.api.js.langchain.com/interfaces/langchain_community_vectorstores_elasticsearch.ElasticClientArgs.html) from `@langchain/community/vectorstores/elasticsearch`
* [ElasticVectorSearch](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_elasticsearch.ElasticVectorSearch.html) from `@langchain/community/vectorstores/elasticsearch`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
https://js.langchain.com/v0.2/docs/integrations/chat/anthropic
ChatAnthropic
=============
LangChain supports Anthropic's Claude family of chat models.
You'll first need to install the [`@langchain/anthropic`](https://www.npmjs.com/package/@langchain/anthropic) package:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
You'll also need to sign up and obtain an [Anthropic API key](https://www.anthropic.com/). Set it as an environment variable named `ANTHROPIC_API_KEY`, or pass it into the constructor as shown below.
Usage
-----
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
You can initialize an instance like this:
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  temperature: 0.9,
  model: "claude-3-sonnet-20240229",
  // In Node.js defaults to process.env.ANTHROPIC_API_KEY,
  // apiKey: "YOUR-API-KEY",
  maxTokens: 1024,
});

const res = await model.invoke("Why is the sky blue?");
console.log(res);
/*
  AIMessage {
    content: "The sky appears blue because of how air in Earth's atmosphere interacts with sunlight. As sunlight passes through the atmosphere, light waves get scattered by gas molecules and airborne particles. Blue light waves scatter more easily than other color light waves. Since blue light gets scattered across the sky, we perceive the sky as having a blue color.",
    name: undefined,
    additional_kwargs: {
      id: 'msg_01JuukTnjoXHuzQaPiSVvZQ1',
      type: 'message',
      role: 'assistant',
      model: 'claude-3-sonnet-20240229',
      stop_reason: 'end_turn',
      stop_sequence: null,
      usage: { input_tokens: 15, output_tokens: 70 }
    }
  }
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Multimodal inputs
-----------------
Claude-3 models support image multimodal inputs. The passed input must be a base64 encoded image with the filetype as a prefix (e.g. `data:image/png;base64,{YOUR_BASE64_ENCODED_DATA}`). Here's an example:
```typescript
import * as fs from "node:fs/promises";

import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("./hotdog.jpg");
const chat = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});
const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "What's in this image?",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const res = await chat.invoke([message]);
console.log({ res });
/*
  {
    res: AIMessage {
      content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage filling encased in a light brown bun or bread roll. The hot dog is cut lengthwise, revealing the bright red sausage interior contrasted against the lightly toasted bread exterior. This classic fast food item is depicted in detail against a plain white background.',
      name: undefined,
      additional_kwargs: {
        id: 'msg_0153boCaPL54QDEMQExkVur6',
        type: 'message',
        role: 'assistant',
        model: 'claude-3-sonnet-20240229',
        stop_reason: 'end_turn',
        stop_sequence: null,
        usage: [Object]
      }
    }
  }
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [HumanMessage](https://v02.api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
See [the official docs](https://docs.anthropic.com/claude/docs/vision#what-image-file-types-does-claude-support) for a complete list of supported file types.
Agents
------
Anthropic models that support tool calling can be used in the Tool Calling agent. Here's an example:
import { z } from "zod";import { ChatAnthropic } from "@langchain/anthropic";import { DynamicStructuredTool } from "@langchain/core/tools";import { AgentExecutor, createToolCallingAgent } from "langchain/agents";import { ChatPromptTemplate } from "@langchain/core/prompts";const llm = new ChatAnthropic({ model: "claude-3-sonnet-20240229", temperature: 0,});// Prompt template must have "input" and "agent_scratchpad input variables"const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"],]);const currentWeatherTool = new DynamicStructuredTool({ name: "get_current_weather", description: "Get the current weather in a given location", schema: z.object({ location: z.string().describe("The city and state, e.g. San Francisco, CA"), }), func: async () => Promise.resolve("28 °C"),});const agent = await createToolCallingAgent({ llm, tools: [currentWeatherTool], prompt,});const agentExecutor = new AgentExecutor({ agent, tools: [currentWeatherTool],});const input = "What's the weather like in SF?";const { output } = await agentExecutor.invoke({ input });console.log(output);/* The current weather in San Francisco, CA is 28°C.*/
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [DynamicStructuredTool](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) from `@langchain/core/tools`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
tip
See the LangSmith trace [here](https://smith.langchain.com/public/e93ff7f6-03f7-4eb1-96c8-09a17dee1462/r)
Custom headers
--------------
You can pass custom headers in your requests like this:
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  maxTokens: 1024,
  clientOptions: {
    defaultHeaders: {
      "X-Api-Key": process.env.ANTHROPIC_API_KEY,
    },
  },
});

const res = await model.invoke("Why is the sky blue?");
console.log(res);
/*
  AIMessage {
    content: "The sky appears blue because of the way sunlight interacts with the gases in Earth's atmosphere. Here's a more detailed explanation:

      - Sunlight is made up of different wavelengths of light, including the entire visible spectrum from red to violet.

      - As sunlight passes through the atmosphere, the gases (nitrogen, oxygen, etc.) cause the shorter wavelengths of light, in the blue and violet range, to be scattered more efficiently in different directions.

      - The blue wavelengths of about 475 nanometers get scattered more than the other visible wavelengths by the tiny gas molecules in the atmosphere.

      - This preferential scattering of blue light in all directions by the gas molecules is called Rayleigh scattering.

      - When we look at the sky, we see this scattered blue light from the sun coming at us from all parts of the sky.

      - At sunrise and sunset, the sun's rays have to travel further through the atmosphere before reaching our eyes, causing more of the blue light to be scattered out, leaving more of the red/orange wavelengths visible - which is why sunrises and sunsets appear reddish.

    So in summary, the blueness of the sky is caused by this selective scattering of blue wavelengths of sunlight by the gases in the atmosphere.",
    name: undefined,
    additional_kwargs: {
      id: 'msg_01Mvvc5GvomqbUxP3YaeWXRe',
      type: 'message',
      role: 'assistant',
      model: 'claude-3-sonnet-20240229',
      stop_reason: 'end_turn',
      stop_sequence: null,
      usage: { input_tokens: 13, output_tokens: 284 }
    }
  }
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Tools
-----
The Anthropic API supports tool calling, along with multi-tool calling. The following examples demonstrate how to call tools:
### Single Tool
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const tool = {
  name: "calculator",
  description: "A simple calculator tool",
  input_schema: zodToJsonSchema(calculatorSchema),
};

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools: [tool],
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(response, null, 2));
/*
{
  "kwargs": {
    "content": "Okay, let's calculate that using the calculator tool:",
    "additional_kwargs": {
      "id": "msg_01YcT1KFV8qH7xG6T6C4EpGq",
      "role": "assistant",
      "model": "claude-3-haiku-20240307",
      "tool_calls": [
        {
          "id": "toolu_01UiqGsTTH45MUveRQfzf7KH",
          "type": "function",
          "function": {
            "arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
            "name": "calculator"
          }
        }
      ]
    },
    "response_metadata": {}
  }
}
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
tip
See the LangSmith trace [here](https://smith.langchain.com/public/90c03ed0-154b-4a50-afbf-83dcbf302647/r)
### Forced tool calling
In this example we'll provide the model with two tools:
* `calculator`
* `get_weather`
Then, when we bind the tools to the model, we'll force it to use the `get_weather` tool by passing the `tool_choice` arg like this:

```typescript
.bind({
  tools,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});
```
Finally, we'll invoke the model, but instead of asking about the weather, we'll ask it to do some math. Since we explicitly forced the model to use the `get_weather` tool, it will ignore the question and emit a `get_weather` tool call anyway (in this case with `<UNKNOWN>` arguments, which is expected).
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const weatherSchema = z.object({
  city: z.string().describe("The city to get the weather from"),
  state: z.string().optional().describe("The state to get the weather from"),
});

const tools = [
  {
    name: "calculator",
    description: "A simple calculator tool",
    input_schema: zodToJsonSchema(calculatorSchema),
  },
  {
    name: "get_weather",
    description:
      "Get the weather of a specific location and return the temperature in Celsius.",
    input_schema: zodToJsonSchema(weatherSchema),
  },
];

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
}).bind({
  tools,
  tool_choice: {
    type: "tool",
    name: "get_weather",
  },
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "What is the sum of 2725 and 273639",
});
console.log(JSON.stringify(response, null, 2));
/*
{
  "kwargs": {
    "tool_calls": [
      {
        "name": "get_weather",
        "args": {
          "city": "<UNKNOWN>",
          "state": "<UNKNOWN>"
        },
        "id": "toolu_01MGRNudJvSDrrCZcPa2WrBX"
      }
    ],
    "response_metadata": {
      "id": "msg_01RW3R4ctq7q5g4GJuGMmRPR",
      "model": "claude-3-haiku-20240307",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 672,
        "output_tokens": 52
      },
      "stop_reason": "tool_use"
    }
  }
}
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
The `tool_choice` argument has three possible values:
* `{ type: "tool", name: "tool_name" }` - Forces the model to use the specified tool.
* `"any"` - Lets the model choose which tool to call, but requires that it call at least one (see the sketch below).
* `"auto"` - The default value. Allows the model to select any tool, or none.
tip
See the LangSmith trace [here](https://smith.langchain.com/public/c5cc8fe7-5e76-4607-8c43-1e0b30e4f5ca/r)
### `withStructuredOutput`
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-haiku-20240307",
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can supply a "name" field to give the LLM additional context
 * around what you are trying to generate. You can also pass
 * 'includeRaw' to get the raw message back from the model too.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);

const includeRawResponse = await includeRawChain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(includeRawResponse, null, 2));
/*
{
  "raw": {
    "kwargs": {
      "content": "Okay, let me use the calculator tool to find the result of 2 + 2:",
      "additional_kwargs": {
        "id": "msg_01HYwRhJoeqwr5LkSCHHks5t",
        "type": "message",
        "role": "assistant",
        "model": "claude-3-haiku-20240307",
        "usage": {
          "input_tokens": 458,
          "output_tokens": 109
        },
        "tool_calls": [
          {
            "id": "toolu_01LDJpdtEQrq6pXSqSgEHErC",
            "type": "function",
            "function": {
              "arguments": "{\"number1\":2,\"number2\":2,\"operation\":\"add\"}",
              "name": "calculator"
            }
          }
        ]
      }
    }
  },
  "parsed": {
    "operation": "add",
    "number1": 2,
    "number2": 2
  }
}
*/
```
#### API Reference:
* [ChatAnthropic](https://v02.api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
tip
See the LangSmith trace [here](https://smith.langchain.com/public/efbd11c5-886e-4e07-be1a-951690fa8a27/r)
https://js.langchain.com/v0.2/docs/integrations/vectorstores/momento_vector_index
Momento Vector Index (MVI)
==========================
[MVI](https://gomomento.com): the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Whether in Node.js, browser, or edge, Momento has you covered.
To sign up and access MVI, visit the [Momento Console](https://console.gomomento.com).
Setup
-----
1. Sign up for an API key in the [Momento Console](https://console.gomomento.com/).
2. Install the SDK for your environment.
2.1. For **Node.js**:
* npm
* Yarn
* pnpm
npm install @gomomento/sdk
yarn add @gomomento/sdk
pnpm add @gomomento/sdk
2.2. For **browser or edge environments**:
* npm
* Yarn
* pnpm
npm install @gomomento/sdk-web
yarn add @gomomento/sdk-web
pnpm add @gomomento/sdk-web
3. Set the required environment variables before running the code.
3.1 OpenAI
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
3.2 Momento
export MOMENTO_API_KEY=YOUR_MOMENTO_API_KEY_HERE # https://console.gomomento.com
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Index documents using `fromTexts` and search
This example demonstrates using the `fromTexts` method to instantiate the vector store and index documents. If the index does not exist, then it will be created. If the index already exists, then the documents will be added to the existing index.
The `ids` are optional; if you omit them, then Momento will generate UUIDs for you.
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { sleep } from "langchain/util/time";

const vectorStore = await MomentoVectorIndex.fromTexts(
  ["hello world", "goodbye world", "salutations world", "farewell world"],
  {},
  new OpenAIEmbeddings(),
  {
    client: new PreviewVectorIndexClient({
      configuration: VectorIndexConfigurations.Laptop.latest(),
      credentialProvider: CredentialProvider.fromEnvironmentVariable({
        environmentVariableName: "MOMENTO_API_KEY",
      }),
    }),
    indexName: "langchain-example-index",
  },
  { ids: ["1", "2", "3", "4"] }
);

// because indexing is async, wait for it to finish to search directly after
await sleep();

const response = await vectorStore.similaritySearch("hello", 2);
console.log(response);
/*
[
  Document { pageContent: 'hello world', metadata: {} },
  Document { pageContent: 'salutations world', metadata: {} }
]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [sleep](https://v02.api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
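As noted above, the `ids` argument is optional. A minimal sketch omitting it, so Momento generates UUIDs, using the same client and index configuration as the example above:

```typescript
// A minimal sketch: omit the final `ids` argument and Momento will
// generate UUIDs for the indexed documents. Assumes the same imports,
// client, and index configuration as the example above.
const autoIdStore = await MomentoVectorIndex.fromTexts(
  ["hello world", "goodbye world"],
  {},
  new OpenAIEmbeddings(),
  {
    client: new PreviewVectorIndexClient({
      configuration: VectorIndexConfigurations.Laptop.latest(),
      credentialProvider: CredentialProvider.fromEnvironmentVariable({
        environmentVariableName: "MOMENTO_API_KEY",
      }),
    }),
    indexName: "langchain-example-index",
  }
);
```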
### Index documents using `fromDocuments` and search
Similar to the above, this example demonstrates using the `fromDocuments` method to instantiate the vector store and index documents. If the index does not exist, then it will be created. If the index already exists, then the documents will be added to the existing index.
Using `fromDocuments` allows you to seamlessly chain the various document loaders with indexing.
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { sleep } from "langchain/util/time";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

const vectorStore = await MomentoVectorIndex.fromDocuments(
  docs,
  new OpenAIEmbeddings(),
  {
    client: new PreviewVectorIndexClient({
      configuration: VectorIndexConfigurations.Laptop.latest(),
      credentialProvider: CredentialProvider.fromEnvironmentVariable({
        environmentVariableName: "MOMENTO_API_KEY",
      }),
    }),
    indexName: "langchain-example-index",
  }
);

// because indexing is async, wait for it to finish to search directly after
await sleep();

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [sleep](https://v02.api.js.langchain.com/functions/langchain_util_time.sleep.html) from `langchain/util/time`
### Search from an existing collection
```typescript
import { MomentoVectorIndex } from "@langchain/community/vectorstores/momento_vector_index";
// For browser/edge, adjust this to import from "@gomomento/sdk-web";
import {
  PreviewVectorIndexClient,
  VectorIndexConfigurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MomentoVectorIndex(new OpenAIEmbeddings(), {
  client: new PreviewVectorIndexClient({
    configuration: VectorIndexConfigurations.Laptop.latest(),
    credentialProvider: CredentialProvider.fromEnvironmentVariable({
      environmentVariableName: "MOMENTO_API_KEY",
    }),
  }),
  indexName: "langchain-example-index",
});

const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
```
#### API Reference:
* [MomentoVectorIndex](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_momento_vector_index.MomentoVectorIndex.html) from `@langchain/community/vectorstores/momento_vector_index`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.2/docs/integrations/vectorstores/vercel_postgres
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![🦜️🔗 Langchain](/v0.2/img/brand/wordmark.png)![🦜️🔗 Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[🦜🔗](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Providers](/v0.2/docs/integrations/platforms/)
* [Providers](/v0.2/docs/integrations/platforms/)
* [Anthropic](/v0.2/docs/integrations/platforms/anthropic)
* [AWS](/v0.2/docs/integrations/platforms/aws)
* [Google](/v0.2/docs/integrations/platforms/google)
* [Microsoft](/v0.2/docs/integrations/platforms/microsoft)
* [OpenAI](/v0.2/docs/integrations/platforms/openai)
* [Components](/v0.2/docs/integrations/components)
* [Chat models](/v0.2/docs/integrations/chat/)
* [LLMs](/v0.2/docs/integrations/llms/)
* [Embedding models](/v0.2/docs/integrations/text_embedding)
* [Document loaders](/v0.2/docs/integrations/document_loaders)
* [Document transformers](/v0.2/docs/integrations/document_transformers)
* [Vector stores](/v0.2/docs/integrations/vectorstores)
* [Memory](/v0.2/docs/integrations/vectorstores/memory)
* [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb)
* [Astra DB](/v0.2/docs/integrations/vectorstores/astradb)
* [Azure AI Search](/v0.2/docs/integrations/vectorstores/azure_aisearch)
* [Azure Cosmos DB](/v0.2/docs/integrations/vectorstores/azure_cosmosdb)
* [Cassandra](/v0.2/docs/integrations/vectorstores/cassandra)
* [Chroma](/v0.2/docs/integrations/vectorstores/chroma)
* [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse)
* [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* [Cloudflare Vectorize](/v0.2/docs/integrations/vectorstores/cloudflare_vectorize)
* [Convex](/v0.2/docs/integrations/vectorstores/convex)
* [Couchbase](/v0.2/docs/integrations/vectorstores/couchbase)
* [Elasticsearch](/v0.2/docs/integrations/vectorstores/elasticsearch)
* [Faiss](/v0.2/docs/integrations/vectorstores/faiss)
* [Google Vertex AI Matching Engine](/v0.2/docs/integrations/vectorstores/googlevertexai)
* [SAP HANA Cloud Vector Engine](/v0.2/docs/integrations/vectorstores/hanavector)
* [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib)
* [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb)
* [Milvus](/v0.2/docs/integrations/vectorstores/milvus)
* [Momento Vector Index (MVI)](/v0.2/docs/integrations/vectorstores/momento_vector_index)
* [MongoDB Atlas](/v0.2/docs/integrations/vectorstores/mongodb_atlas)
* [MyScale](/v0.2/docs/integrations/vectorstores/myscale)
* [Neo4j Vector Index](/v0.2/docs/integrations/vectorstores/neo4jvector)
* [Neon Postgres](/v0.2/docs/integrations/vectorstores/neon)
* [OpenSearch](/v0.2/docs/integrations/vectorstores/opensearch)
* [PGVector](/v0.2/docs/integrations/vectorstores/pgvector)
* [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone)
* [Prisma](/v0.2/docs/integrations/vectorstores/prisma)
* [Qdrant](/v0.2/docs/integrations/vectorstores/qdrant)
* [Redis](/v0.2/docs/integrations/vectorstores/redis)
* [Rockset](/v0.2/docs/integrations/vectorstores/rockset)
* [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore)
* [Supabase](/v0.2/docs/integrations/vectorstores/supabase)
* [Tigris](/v0.2/docs/integrations/vectorstores/tigris)
* [Turbopuffer](/v0.2/docs/integrations/vectorstores/turbopuffer)
* [TypeORM](/v0.2/docs/integrations/vectorstores/typeorm)
* [Typesense](/v0.2/docs/integrations/vectorstores/typesense)
* [Upstash Vector](/v0.2/docs/integrations/vectorstores/upstash)
* [USearch](/v0.2/docs/integrations/vectorstores/usearch)
* [Vectara](/v0.2/docs/integrations/vectorstores/vectara)
* [Vercel Postgres](/v0.2/docs/integrations/vectorstores/vercel_postgres)
* [Voy](/v0.2/docs/integrations/vectorstores/voy)
* [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate)
* [Xata](/v0.2/docs/integrations/vectorstores/xata)
* [Zep](/v0.2/docs/integrations/vectorstores/zep)
* [Retrievers](/v0.2/docs/integrations/retrievers)
* [Tools](/v0.2/docs/integrations/tools)
* [Toolkits](/v0.2/docs/integrations/toolkits)
* [Stores](/v0.2/docs/integrations/stores/)
* [](/v0.2/)
* [Components](/v0.2/docs/integrations/components)
* [Vector stores](/v0.2/docs/integrations/vectorstores)
* Vercel Postgres
On this page
Vercel Postgres
===============
LangChain.js supports using the [`@vercel/postgres`](https://www.npmjs.com/package/@vercel/postgres) package to treat generic Postgres databases as vector stores, provided they support the [`pgvector`](https://github.com/pgvector/pgvector) Postgres extension.
This integration is particularly useful in web environments like Edge Functions.
Setup
-----
To work with Vercel Postgres, you need to install the `@vercel/postgres` package:

```bash
npm install @vercel/postgres
# or
yarn add @vercel/postgres
# or
pnpm add @vercel/postgres
```
> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

You'll also need the `@langchain/community` package:

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
This integration automatically connects using the connection string set under `process.env.POSTGRES_URL`. You can also pass a connection string manually like this:
```typescript
const vectorstore = await VercelPostgres.initialize(new OpenAIEmbeddings(), {
  postgresConnectionOptions: {
    connectionString:
      "postgres://<username>:<password>@<hostname>:<port>/<dbname>",
  },
});
```
### Connecting to Vercel Postgres
A simple way to get started is to create a serverless [Vercel Postgres instance](https://vercel.com/docs/storage/vercel-postgres/quickstart). If you're deploying to a Vercel project with an associated Vercel Postgres instance, the required `POSTGRES_URL` environment variable will already be populated in hosted environments.
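If the variable is set, initialization needs no explicit connection options. A minimal sketch, assuming the default table and column configuration:

```typescript
import { VercelPostgres } from "@langchain/community/vectorstores/vercel_postgres";
import { OpenAIEmbeddings } from "@langchain/openai";

// Assumes process.env.POSTGRES_URL is set (e.g. by a linked Vercel Postgres
// instance); the integration picks it up automatically, so no connection
// options are passed here.
const store = await VercelPostgres.initialize(new OpenAIEmbeddings());
```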
### Connecting to other databases
If you prefer to host your own Postgres instance, you can use a similar flow to LangChain's [PGVector](/v0.2/docs/integrations/vectorstores/pgvector) vectorstore integration and set the connection string either as an environment variable or as shown above.
Usage
-----
```typescript
import { CohereEmbeddings } from "@langchain/cohere";
import { VercelPostgres } from "@langchain/community/vectorstores/vercel_postgres";

// Config is only required if you want to override default values.
const config = {
  // tableName: "testvercelvectorstorelangchain",
  // postgresConnectionOptions: {
  //   connectionString: "postgres://<username>:<password>@<hostname>:<port>/<dbname>",
  // },
  // columns: {
  //   idColumnName: "id",
  //   vectorColumnName: "vector",
  //   contentColumnName: "content",
  //   metadataColumnName: "metadata",
  // },
};

const vercelPostgresStore = await VercelPostgres.initialize(
  new CohereEmbeddings(),
  config
);

const docHello = {
  pageContent: "hello",
  metadata: { topic: "nonsense" },
};
const docHi = { pageContent: "hi", metadata: { topic: "nonsense" } };
const docMitochondria = {
  pageContent: "Mitochondria is the powerhouse of the cell",
  metadata: { topic: "science" },
};

const ids = await vercelPostgresStore.addDocuments([
  docHello,
  docHi,
  docMitochondria,
]);

const results = await vercelPostgresStore.similaritySearch("hello", 2);
console.log(results);
/*
  [
    Document { pageContent: 'hello', metadata: { topic: 'nonsense' } },
    Document { pageContent: 'hi', metadata: { topic: 'nonsense' } }
  ]
*/

// Metadata filtering
const results2 = await vercelPostgresStore.similaritySearch(
  "Irrelevant query, metadata filtering",
  2,
  {
    topic: "science",
  }
);
console.log(results2);
/*
  [
    Document {
      pageContent: 'Mitochondria is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

// Metadata filtering with IN-filters works as well
const results3 = await vercelPostgresStore.similaritySearch(
  "Irrelevant query, metadata filtering",
  3,
  {
    topic: { in: ["science", "nonsense"] },
  }
);
console.log(results3);
/*
  [
    Document { pageContent: 'hello', metadata: { topic: 'nonsense' } },
    Document { pageContent: 'hi', metadata: { topic: 'nonsense' } },
    Document {
      pageContent: 'Mitochondria is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

// Upserting is supported as well
await vercelPostgresStore.addDocuments(
  [
    {
      pageContent: "ATP is the powerhouse of the cell",
      metadata: { topic: "science" },
    },
  ],
  { ids: [ids[2]] }
);

const results4 = await vercelPostgresStore.similaritySearch(
  "What is the powerhouse of the cell?",
  1
);
console.log(results4);
/*
  [
    Document {
      pageContent: 'ATP is the powerhouse of the cell',
      metadata: { topic: 'science' }
    }
  ]
*/

await vercelPostgresStore.delete({ ids: [ids[2]] });

const results5 = await vercelPostgresStore.similaritySearch(
  "No more metadata",
  2,
  {
    topic: "science",
  }
);
console.log(results5);
/*
  []
*/

// Remember to call .end() to close the connection!
await vercelPostgresStore.end();
```
#### API Reference:
* [CohereEmbeddings](https://v02.api.js.langchain.com/classes/langchain_cohere.CohereEmbeddings.html) from `@langchain/cohere`
* [VercelPostgres](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_vercel_postgres.VercelPostgres.html) from `@langchain/community/vectorstores/vercel_postgres`
* * *
Chat models
===========
Features (natively supported)
-----------------------------
All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. `invoke`, `batch`, and `stream`. This gives all ChatModels basic support for invoking, streaming, and batching, which by default is implemented as follows:

* _Streaming_ support defaults to returning an `AsyncIterator` of a single value: the final result returned by the underlying ChatModel provider. This doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but it ensures that code expecting an iterator of tokens works with any of our ChatModel integrations.
* _Batch_ support defaults to calling the underlying ChatModel in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig`.
* _Map_ support defaults to calling `.invoke` on each item of the array it was called on.

Each ChatModel integration can optionally provide native implementations of invoke, streaming, or batching.
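As an illustration, here is a minimal sketch exercising the three methods; it uses `ChatOpenAI`, but any ChatModel integration exposes the same interface:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });

// invoke: one input, one AIMessage result.
const message = await model.invoke("Say hello in French.");
console.log(message.content);

// stream: an async iterator of message chunks (token-by-token only where
// the provider supports it natively).
for await (const chunk of await model.stream("Count to three.")) {
  console.log(chunk.content);
}

// batch: one call per input, run in parallel; maxConcurrency caps parallelism.
const batched = await model.batch(["What is 1 + 1?", "What is 2 + 2?"], {
  maxConcurrency: 2,
});
console.log(batched.map((m) => m.content));
```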
Additionally, some chat models support additional ways of guaranteeing structure in their outputs by allowing you to pass in a defined schema. [Function calling and parallel function calling](/v0.2/docs/how_to/tool_calling) (tool calling) are two common ones, and those capabilities allow you to use the chat model as the LLM in certain types of agents. Some models in LangChain have also implemented a `withStructuredOutput()` method that unifies many of these different ways of constraining output to a schema.
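For example, a minimal sketch of `withStructuredOutput()` with a model that implements it (such as `ChatOpenAI`), using a [Zod](https://zod.dev) schema; the schema shape here is illustrative:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// An illustrative schema: the model's output is constrained to this shape.
const jokeSchema = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-3.5-turbo" });
const structuredModel = model.withStructuredOutput(jokeSchema);

// Returns a plain object matching the schema instead of a raw message.
const joke = await structuredModel.invoke("Tell me a joke about cats.");
console.log(joke.setup, joke.punchline);
```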
The table below shows, for each integration, which features have been implemented with native support. A yellow circle (🟡) indicates partial support - for example, a model that supports tool calling but not tool messages for agents.
| Model | Invoke | Stream | Batch | Function Calling | Tool Calling | `withStructuredOutput()` |
| --- | --- | --- | --- | --- | --- | --- |
| BedrockChat | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatAlibabaTongyi | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
| ChatAnthropic | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
| ChatBaiduWenxin | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
| ChatCloudflareWorkersAI | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatCohere | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatFireworks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| ChatGoogleGenerativeAI | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatGoogleVertexAI | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatVertexAI | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
| ChatGooglePaLM | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
| ChatGroq | ✅ | ✅ | ✅ | ❌ | 🟡 | ✅ |
| ChatLlamaCpp | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatMinimax | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ |
| ChatMistralAI | ✅ | ❌ | ✅ | ❌ | ✅ | ✅ |
| ChatOllama | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatOpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatTogetherAI | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| ChatYandexGPT | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
| ChatZhipuAI | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
* * *
Xata
====
[Xata](https://xata.io) is a serverless data platform, based on PostgreSQL. It provides a type-safe TypeScript/JavaScript SDK for interacting with your database, and a UI for managing your data.
Xata has a native vector type, which can be added to any table, and supports similarity search. LangChain inserts vectors directly into Xata and queries it for the nearest neighbors of a given vector, so you can use all the LangChain Embeddings integrations with Xata.
Setup
-----

### Install the Xata CLI

```bash
npm install @xata.io/cli -g
```
### Create a database to be used as a vector store
In the [Xata UI](https://app.xata.io) create a new database. You can name it whatever you want, but for this example we'll use `langchain`. Create a table; again, you can name it anything, but we'll use `vectors`. Add the following columns via the UI:
* `content` of type "Text". This is used to store the `Document.pageContent` values.
* `embedding` of type "Vector". Use the dimension of the model you plan to use (1536 for OpenAI).
* Any other columns you want to use as metadata. These are populated from the `Document.metadata` object. For example, if `Document.metadata` has a `title` property, you can create a `title` column in the table and it will be populated (see the sketch below).
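As a sketch of how that mapping works (assuming you added a hypothetical `title` column as described above):

```typescript
import { Document } from "@langchain/core/documents";

// pageContent is written to the "content" column; each metadata key is
// written to the column of the same name (here, the hypothetical "title").
const doc = new Document({
  pageContent: "Xata is a serverless data platform",
  metadata: { title: "What is Xata?" },
});
```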
### Initialize the project

In your project, run:

```bash
xata init
```
Then choose the database you created above. This will also generate a `xata.ts` or `xata.js` file that defines the client you can use to interact with the database. See the [Xata getting started docs](https://xata.io/docs/getting-started/installation) for more details on using the Xata JavaScript/TypeScript SDK.
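Once the client is generated, you can import it instead of constructing one by hand. A minimal sketch, assuming `xata init` generated a `xata.ts` file at your project root:

```typescript
// The "./xata" path is wherever xata init generated the client in your project.
import { getXataClient } from "./xata";

const xata = getXataClient();
```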
Usage
-----

> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
### Example: Q&A chatbot using OpenAI and Xata as vector store

This example uses the `VectorDBQAChain` to search the documents stored in Xata and then passes them as context to the OpenAI model to answer the user's question.
```typescript
import { XataVectorSearch } from "@langchain/community/vectorstores/xata";
import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";
import { BaseClient } from "@xata.io/client";
import { VectorDBQAChain } from "langchain/chains";
import { Document } from "@langchain/core/documents";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/xata

// If you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

export async function run() {
  const client = getXataClient();
  const table = "vectors";
  const embeddings = new OpenAIEmbeddings();
  const store = new XataVectorSearch(embeddings, { client, table });

  // Add documents
  const docs = [
    new Document({
      pageContent: "Xata is a Serverless Data platform based on PostgreSQL",
    }),
    new Document({
      pageContent:
        "Xata offers a built-in vector type that can be used to store and query vectors",
    }),
    new Document({
      pageContent: "Xata includes similarity search",
    }),
  ];
  const ids = await store.addDocuments(docs);

  // eslint-disable-next-line no-promise-executor-return
  await new Promise((r) => setTimeout(r, 2000));

  const model = new OpenAI();
  const chain = VectorDBQAChain.fromLLM(model, store, {
    k: 1,
    returnSourceDocuments: true,
  });
  const response = await chain.invoke({ query: "What is Xata?" });
  console.log(JSON.stringify(response, null, 2));

  await store.delete({ ids });
}
```
#### API Reference:
* [XataVectorSearch](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_xata.XataVectorSearch.html) from `@langchain/community/vectorstores/xata`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [VectorDBQAChain](https://v02.api.js.langchain.com/classes/langchain_chains.VectorDBQAChain.html) from `langchain/chains`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Example: Similarity search with a metadata filter
This example shows how to implement semantic search using LangChain.js and Xata. Before running it, make sure to add an `author` column of type String to the `vectors` table in Xata.
```typescript
import { XataVectorSearch } from "@langchain/community/vectorstores/xata";
import { OpenAIEmbeddings } from "@langchain/openai";
import { BaseClient } from "@xata.io/client";
import { Document } from "@langchain/core/documents";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/xata
// Also, add a column named "author" to the "vectors" table.

// If you use the generated client, you don't need this function.
// Just import getXataClient from the generated xata.ts instead.
const getXataClient = () => {
  if (!process.env.XATA_API_KEY) {
    throw new Error("XATA_API_KEY not set");
  }
  if (!process.env.XATA_DB_URL) {
    throw new Error("XATA_DB_URL not set");
  }
  const xata = new BaseClient({
    databaseURL: process.env.XATA_DB_URL,
    apiKey: process.env.XATA_API_KEY,
    branch: process.env.XATA_BRANCH || "main",
  });
  return xata;
};

export async function run() {
  const client = getXataClient();
  const table = "vectors";
  const embeddings = new OpenAIEmbeddings();
  const store = new XataVectorSearch(embeddings, { client, table });

  // Add documents
  const docs = [
    new Document({
      pageContent: "Xata works great with Langchain.js",
      metadata: { author: "Xata" },
    }),
    new Document({
      pageContent: "Xata works great with Langchain",
      metadata: { author: "Langchain" },
    }),
    new Document({
      pageContent: "Xata includes similarity search",
      metadata: { author: "Xata" },
    }),
  ];
  const ids = await store.addDocuments(docs);

  // eslint-disable-next-line no-promise-executor-return
  await new Promise((r) => setTimeout(r, 2000));

  // author is applied as a pre-filter to the similarity search
  const results = await store.similaritySearchWithScore("xata works great", 6, {
    author: "Langchain",
  });
  console.log(JSON.stringify(results, null, 2));

  await store.delete({ ids });
}
```
#### API Reference:
* [XataVectorSearch](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_xata.XataVectorSearch.html) from `@langchain/community/vectorstores/xata`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* * *
Google
======
Functionality related to [Google Cloud Platform](https://cloud.google.com/).
Chat models
-----------

### Gemini Models
Access Gemini models such as `gemini-pro` and `gemini-pro-vision` through the [`ChatGoogleGenerativeAI`](/v0.2/docs/integrations/chat/google_generativeai) class, or if using VertexAI, via the [`ChatVertexAI`](/v0.2/docs/integrations/chat/google_vertex_ai) class.
**GenAI**

> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/google-genai
# or
yarn add @langchain/google-genai
# or
pnpm add @langchain/google-genai
```
Configure your API key:

```bash
export GOOGLE_API_KEY=your-api-key
```
```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});

// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
```
Gemini vision models support image inputs when providing a single human message. For example:
```typescript
// Imports needed by this snippet:
import fs from "node:fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
```
> **Tip:** Click [here](/v0.2/docs/integrations/chat/google_generativeai) for the `@langchain/google-genai` specific integration docs.
**VertexAI**

> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).

```bash
npm install @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```
Then, you'll need to add your service account credentials, either directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:

```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
```

or as a file path:

```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS_FILE=/path/to/your/credentials.json
```
```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const model = new ChatVertexAI({
  model: "gemini-1.0-pro",
  maxOutputTokens: 2048,
});

// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
```
Gemini vision models support image inputs when providing a single human message. For example:
```typescript
// Imports needed by this snippet:
import fs from "node:fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatVertexAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});
const image = fs.readFileSync("./hotdog.png").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
```
> **Tip:** Click [here](/v0.2/docs/integrations/chat/google_vertex_ai) for the `@langchain/google-vertexai` specific integration docs.

The value of `image_url` must be a base64 encoded image (e.g., `data:image/png;base64,abcd124`).
### Vertex AI (Legacy)

> **Tip:** See the legacy Google PaLM and VertexAI documentation [here](/v0.2/docs/integrations/chat/google_palm) for chat, and [here](/v0.2/docs/integrations/llms/google_palm) for LLMs.
Vector Store
------------

### Vertex AI Vector Search
> [Vertex AI Vector Search](https://cloud.google.com/vertex-ai/docs/matching-engine/overview), formerly known as Vertex AI Matching Engine, provides the industry's leading high-scale low latency vector database. These vector databases are commonly referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.
```typescript
import { MatchingEngine } from "langchain/vectorstores/googlevertexai";
```
Tools
-----

### Google Search
* Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
* Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively
A `GoogleCustomSearch` utility wraps this API. To import it:

```typescript
import { GoogleCustomSearch } from "langchain/tools";
```
You can load this wrapper as a Tool (to use with an agent):

```typescript
const tools = [new GoogleCustomSearch({})];
// Pass this variable into your agent.
```
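You can also call the tool directly. A minimal sketch, assuming the `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` environment variables from the steps above are set:

```typescript
import { GoogleCustomSearch } from "langchain/tools";

const search = new GoogleCustomSearch();

// Tools are Runnables, so they can be invoked directly with a query string.
const results = await search.invoke("LangChain.js");
console.log(results);
```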
* * *
Embedding models
================
* [Alibaba Tongyi](/v0.2/docs/integrations/text_embedding/alibaba_tongyi): The AlibabaTongyiEmbeddings class uses the Alibaba Tongyi API to generate embeddings for a given text.
* [Azure OpenAI](/v0.2/docs/integrations/text_embedding/azure_openai): Azure OpenAI is a cloud service to help you quickly develop generative AI experiences with a diverse set of prebuilt and curated models from OpenAI, Meta and beyond.
* [Baidu Qianfan](/v0.2/docs/integrations/text_embedding/baidu_qianfan): The BaiduQianfanEmbeddings class uses the Baidu Qianfan API to generate embeddings for a given text.
* [Bedrock](/v0.2/docs/integrations/text_embedding/bedrock): Amazon Bedrock is a fully managed service that makes base models from Amazon and third-party model providers accessible through an API.
* [Cloudflare Workers AI](/v0.2/docs/integrations/text_embedding/cloudflare_ai): If you're deploying your project in a Cloudflare worker, you can use Cloudflare's built-in Workers AI embeddings with LangChain.js.
* [Cohere](/v0.2/docs/integrations/text_embedding/cohere): The CohereEmbeddings class uses the Cohere API to generate embeddings for a given text.
* [Fireworks](/v0.2/docs/integrations/text_embedding/fireworks): The FireworksEmbeddings class allows you to use the Fireworks AI API to generate embeddings.
* [Google AI](/v0.2/docs/integrations/text_embedding/google_generativeai): You can access Google's generative AI embeddings models through this integration.
* [Google PaLM](/v0.2/docs/integrations/text_embedding/google_palm): This integration does not support `embeddings-*` models. Check Google AI embeddings.
* [Google Vertex AI](/v0.2/docs/integrations/text_embedding/google_vertex_ai): The GoogleVertexAIEmbeddings class uses Google's Vertex AI PaLM models to generate embeddings.
* [Gradient AI](/v0.2/docs/integrations/text_embedding/gradient_ai): The GradientEmbeddings class uses the Gradient AI API to generate embeddings for a given text.
* [HuggingFace Inference](/v0.2/docs/integrations/text_embedding/hugging_face_inference): This Embeddings integration uses the HuggingFace Inference API to generate embeddings for a given text, using the sentence-transformers/distilbert-base-nli-mean-tokens model by default. You can pass a different model name to the constructor to use a different model.
* [Llama CPP](/v0.2/docs/integrations/text_embedding/llama_cpp): Only available on Node.js.
* [Minimax](/v0.2/docs/integrations/text_embedding/minimax): The MinimaxEmbeddings class uses the Minimax API to generate embeddings for a given text.
* [Mistral AI](/v0.2/docs/integrations/text_embedding/mistralai): The MistralAIEmbeddings class uses the Mistral AI API to generate embeddings for a given text.
* [Nomic](/v0.2/docs/integrations/text_embedding/nomic): The NomicEmbeddings class uses the Nomic AI API to generate embeddings for a given text.
* [Ollama](/v0.2/docs/integrations/text_embedding/ollama): The OllamaEmbeddings class uses the /api/embeddings route of a locally hosted Ollama server to generate embeddings for given texts.
* [OpenAI](/v0.2/docs/integrations/text_embedding/openai): The OpenAIEmbeddings class uses the OpenAI API to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing stripNewLines: false to the constructor.
* [Prem AI](/v0.2/docs/integrations/text_embedding/premai): The PremEmbeddings class uses the Prem AI API to generate embeddings for a given text.
* [TensorFlow](/v0.2/docs/integrations/text_embedding/tensorflow): This Embeddings integration runs the embeddings entirely in your browser or Node.js environment, using TensorFlow.js. This means that your data isn't sent to any third party, and you don't need to sign up for any API keys. However, it does require more memory and processing power than the other integrations.
* [Together AI](/v0.2/docs/integrations/text_embedding/togetherai): The TogetherAIEmbeddings class uses the Together AI API to generate embeddings for a given text.
* [HuggingFace Transformers](/v0.2/docs/integrations/text_embedding/transformers): The TransformerEmbeddings class uses the Transformers.js package to generate embeddings for a given text.
* [Voyage AI](/v0.2/docs/integrations/text_embedding/voyageai): The VoyageEmbeddings class uses the Voyage AI REST API to generate embeddings for a given text.
* [ZhipuAI](/v0.2/docs/integrations/text_embedding/zhipuai): The ZhipuAIEmbeddings class uses the ZhipuAI API to generate embeddings for a given text.
* * *
ChatMistralAI
=============
[Mistral AI](https://mistral.ai/) is a research organization and hosting platform for LLMs. They're best known for their family of 7B models ([`mistral7b` // `mistral-tiny`](https://mistral.ai/news/announcing-mistral-7b/), [`mixtral8x7b` // `mistral-small`](https://mistral.ai/news/mixtral-of-experts/)).
The LangChain implementation of Mistral's models uses their hosted generation API, making it easier to access their models without needing to run them locally.
Models[](#models "Direct link to Models")
------------------------------------------
Mistral's API offers access to their open-source and proprietary models:
* `open-mistral-7b` (aka `mistral-tiny-2312`)
* `open-mixtral-8x7b` (aka `mistral-small-2312`)
* `mistral-small-latest` (aka `mistral-small-2402`) (default)
* `mistral-medium-latest` (aka `mistral-medium-2312`)
* `mistral-large-latest` (aka `mistral-large-2402`)
See [this page](https://docs.mistral.ai/guides/model-selection/) for an up-to-date list.
Setup[](#setup "Direct link to Setup")
---------------------------------------
In order to use the Mistral API you'll need an API key. You can sign up for a Mistral account and create an API key [here](https://console.mistral.ai/).
You'll first need to install the [`@langchain/mistralai`](https://www.npmjs.com/package/@langchain/mistralai) package:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/mistralai

# Yarn
yarn add @langchain/mistralai

# pnpm
pnpm add @langchain/mistralai
```
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
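For illustration, here is a minimal sketch of the preferred style (treating the older `modelName`-style fields as deprecated aliases is an assumption here):

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

// Preferred, unified parameter names:
const model = new ChatMistralAI({
  model: "mistral-small-latest", // rather than `modelName`
  apiKey: process.env.MISTRAL_API_KEY, // rather than a provider-specific key field
});
```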
Usage[](#usage "Direct link to Usage")
---------------------------------------
When sending chat messages to Mistral, there are a few requirements to follow:
* The first message can __not__ be an assistant (ai) message.
* Messages __must__ alternate between user and assistant (ai) messages.
* Messages can __not__ end with an assistant (ai) or system message.
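To make these rules concrete, here is a minimal sketch of a message sequence that satisfies them (the conversation content is illustrative):

```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const model = new ChatMistralAI({ apiKey: process.env.MISTRAL_API_KEY });

// Valid: starts with a human message, alternates human/ai, and ends with a human message.
const response = await model.invoke([
  new HumanMessage("What's the capital of France?"),
  new AIMessage("The capital of France is Paris."),
  new HumanMessage("And roughly how many people live there?"),
]);
```

The full example below wires the same model into a prompt chain instead: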
```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small",
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);

const chain = prompt.pipe(model);

const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);

/**
response AIMessage {
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: "Hello! I'm here to help answer any questions you might have or provide information on a variety of topics. How can I assist you today?\n" +
    '\n' +
    'Here are some common tasks I can help with:\n' +
    '\n' +
    '* Setting alarms or reminders\n' +
    '* Sending emails or messages\n' +
    '* Making phone calls\n' +
    '* Providing weather information\n' +
    '* Creating to-do lists\n' +
    '* Offering suggestions for restaurants, movies, or other local activities\n' +
    '* Providing definitions and explanations for words or concepts\n' +
    '* Translating text into different languages\n' +
    '* Playing music or podcasts\n' +
    '* Setting timers\n' +
    '* Providing directions or traffic information\n' +
    '* And much more!\n' +
    '\n' +
    "Let me know how I can help you specifically, and I'll do my best to make your day easier and more productive!\n" +
    '\n' +
    'Best regards,\n' +
    'Your helpful assistant.',
  name: undefined,
  additional_kwargs: {}
}
 */
```
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/d69d0db9-f29e-45aa-a40d-b53f6273d7d0/r)
### Streaming[](#streaming "Direct link to Streaming")
Mistral's API also supports streaming token responses. The example below demonstrates how to use this feature.
```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small",
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);

const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const response = await chain.stream({
  input: "Hello",
});

for await (const item of response) {
  console.log("stream item:", item);
}

/**
stream item:
stream item: Hello! I'm here to help answer any questions you
stream item: might have or assist you with any task you'd like to
stream item: accomplish. I can provide information
stream item: on a wide range of topics
stream item: , from math and science to history and literature. I can
stream item: also help you manage your schedule, set reminders, and
stream item: much more. Is there something specific you need help with? Let
stream item: me know!
stream item:
 */
```
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/061d90f2-ac7e-44c5-8790-8b23299f9217/r)
### Tool calling[](#tool-calling "Direct link to Tool calling")
Mistral's API now supports tool calling and JSON mode! The examples below demonstrate how to use them, along with how to use the `withStructuredOutput` method to easily compose structured output LLM calls.
```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";
import { z } from "zod";
import { StructuredTool } from "@langchain/core/tools";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

// Extend the StructuredTool class to create a new tool
class CalculatorTool extends StructuredTool {
  name = "calculator";

  description = "A simple calculator tool";

  schema = calculatorSchema;

  async _call(input: z.infer<typeof calculatorSchema>) {
    return JSON.stringify(input);
  }
}

// Or you can convert the tool to a JSON schema using
// a library like zod-to-json-schema
// Uncomment the lines below to use tools this way.
// import { zodToJsonSchema } from "zod-to-json-schema";
// const calculatorJsonSchema = zodToJsonSchema(calculatorSchema);

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Bind the tool to the model
const modelWithTool = model.bind({
  tools: [new CalculatorTool()],
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Define an output parser that can handle tool responses
const outputParser = new JsonOutputKeyToolsParser({
  keyName: "calculator",
  returnSingle: true,
});

// Chain your prompt, model, and output parser together
const chain = prompt.pipe(modelWithTool).pipe(outputParser);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);

/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [JsonOutputKeyToolsParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers_openai_tools.JsonOutputKeyToolsParser.html) from `@langchain/core/output_parsers/openai_tools`
* [StructuredTool](https://v02.api.js.langchain.com/classes/langchain_core_tools.StructuredTool.html) from `@langchain/core/tools`
### `.withStructuredOutput({ ... })`[](#withstructuredoutput-- "Direct link to withstructuredoutput--")
info
The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.
Using the `.withStructuredOutput` method, you can easily make the LLM return structured output, given only a Zod or JSON schema:
note
The Mistral tool calling API requires descriptions for each tool field. If descriptions are not supplied, the API will error.
```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can supply a "name" field to give the LLM additional context
 * around what you are trying to generate. You can also pass
 * 'includeRaw' to get the raw message back from the model too.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);

const includeRawResponse = await includeRawChain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(includeRawResponse, null, 2));
/*
  {
    "raw": {
      "kwargs": {
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "null",
              "type": "function",
              "function": {
                "name": "calculator",
                "arguments": "{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}"
              }
            }
          ]
        }
      }
    },
    "parsed": {
      "operation": "add",
      "number1": 2,
      "number2": 2
    }
  }
*/
```
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
### Using JSON schema:[](#using-json-schema "Direct link to Using JSON schema:")
```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorJsonSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute.",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: {
      type: "number",
      description: "The second number to operate on.",
    },
  },
  required: ["operation", "number1", "number2"],
  description: "A simple calculator tool",
};

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
### Tool calling agent[](#tool-calling-agent "Direct link to Tool calling agent")
The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent. Here's an example:
import { z } from "zod";import { ChatMistralAI } from "@langchain/mistralai";import { DynamicStructuredTool } from "@langchain/core/tools";import { AgentExecutor, createToolCallingAgent } from "langchain/agents";import { ChatPromptTemplate } from "@langchain/core/prompts";const llm = new ChatMistralAI({ temperature: 0, model: "mistral-large-latest",});// Prompt template must have "input" and "agent_scratchpad input variables"const prompt = ChatPromptTemplate.fromMessages([ ["system", "You are a helpful assistant"], ["placeholder", "{chat_history}"], ["human", "{input}"], ["placeholder", "{agent_scratchpad}"],]);const currentWeatherTool = new DynamicStructuredTool({ name: "get_current_weather", description: "Get the current weather in a given location", schema: z.object({ location: z.string().describe("The city and state, e.g. San Francisco, CA"), }), func: async () => Promise.resolve("28 °C"),});const agent = await createToolCallingAgent({ llm, tools: [currentWeatherTool], prompt,});const agentExecutor = new AgentExecutor({ agent, tools: [currentWeatherTool],});const input = "What's the weather like in Paris?";const { output } = await agentExecutor.invoke({ input });console.log(output);/* The current weather in Paris is 28 °C.*/
#### API Reference:
* [ChatMistralAI](https://v02.api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) from `@langchain/mistralai`
* [DynamicStructuredTool](https://v02.api.js.langchain.com/classes/langchain_core_tools.DynamicStructuredTool.html) from `@langchain/core/tools`
* [AgentExecutor](https://v02.api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createToolCallingAgent](https://v02.api.js.langchain.com/functions/langchain_agents.createToolCallingAgent.html) from `langchain/agents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
https://js.langchain.com/v0.1/docs/ecosystem/
Ecosystem
=========
* [🗃️ Integrations](/v0.1/docs/ecosystem/integrations/) (5 items)
* [📄️ Integrating with LangServe](/v0.1/docs/ecosystem/langserve/): LangServe is a Python framework that helps developers deploy LangChain runnables and chains.
* [🔗 LangSmith](https://docs.smith.langchain.com)
https://js.langchain.com/v0.1/
Introduction
============
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
* **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
* **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
* **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
* **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks. (_Python only_)
* **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API. (_Python only_)
* **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
![LangChain Diagram](/v0.1/assets/images/langchain_stack_feb_2024-101939844004a99c1b676723fc0ee5e9.webp)
Together, these products simplify the entire application lifecycle:
* **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
* **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
* **Deploy**: Turn any chain into an API with LangServe.
LangChain Libraries[](#langchain-libraries "Direct link to LangChain Libraries")
---------------------------------------------------------------------------------
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
Get started[](#get-started "Direct link to Get started")
---------------------------------------------------------
[Here's](/v0.1/docs/get_started/installation/) how to install LangChain, set up your environment, and start building.
We recommend following our [Quickstart](/v0.1/docs/get_started/quickstart/) guide to familiarize yourself with the framework by building your first LangChain application.
Read up on our [Security](/v0.1/docs/security/) best practices to make sure you're developing safely with LangChain.
note
These docs focus on the JS/TS LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
LangChain Expression Language (LCEL)[](#langchain-expression-language-lcel "Direct link to LangChain Expression Language (LCEL)")
----------------------------------------------------------------------------------------------------------------------------------
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
* **[Overview](/v0.1/docs/expression_language/)**: LCEL and its benefits
* **[Interface](/v0.1/docs/expression_language/interface/)**: The standard interface for LCEL objects
* **[How-to](/v0.1/docs/expression_language/how_to/routing/)**: Key features of LCEL
* **[Cookbook](/v0.1/docs/expression_language/cookbook/)**: Example code for accomplishing common tasks
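To give a flavor of LCEL composition, here is a minimal sketch (it assumes `@langchain/openai` is installed and `OPENAI_API_KEY` is set):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Declaratively compose prompt -> model -> output parser into a single runnable.
const chain = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}")
  .pipe(new ChatOpenAI({ temperature: 0 }))
  .pipe(new StringOutputParser());

const joke = await chain.invoke({ topic: "parrots" });
console.log(joke);
```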
Modules[](#modules "Direct link to Modules")
---------------------------------------------
LangChain provides standard, extendable interfaces and integrations for the following modules:
#### [Model I/O](/v0.1/docs/modules/model_io/)[](#model-io "Direct link to model-io")
Interface with language models
#### [Retrieval](/v0.1/docs/modules/data_connection/)[](#retrieval "Direct link to retrieval")
Interface with application-specific data
#### [Agents](/v0.1/docs/modules/agents/)[](#agents "Direct link to agents")
Let models choose which tools to use given high-level directives
Examples, ecosystem, and resources[](#examples-ecosystem-and-resources "Direct link to Examples, ecosystem, and resources")
----------------------------------------------------------------------------------------------------------------------------
### [Use cases](/v0.1/docs/use_cases/)[](#use-cases "Direct link to use-cases")
Walkthroughs and techniques for common end-to-end use cases, like:
* [Document question answering](/v0.1/docs/use_cases/question_answering/)
* [RAG](/v0.1/docs/use_cases/question_answering/)
* [Agents](/v0.1/docs/use_cases/autonomous_agents/)
* and much more...
### [Integrations](/v0.1/docs/integrations/platforms/)[](#integrations "Direct link to integrations")
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.1/docs/integrations/platforms/).
### [API reference](https://api.js.langchain.com)[](#api-reference "Direct link to api-reference")
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental packages.
### [Developer's guide](/v0.1/docs/contributing/)[](#developers-guide "Direct link to developers-guide")
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
### [Community](/v0.1/docs/community/)[](#community "Direct link to community")
Head to the [Community navigator](/v0.1/docs/community/) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
https://js.langchain.com/v0.1/docs/langgraph/
🦜🕸️LangGraph.js
=================
⚡ Building language agents as graphs ⚡
Overview[](#overview "Direct link to Overview")
------------------------------------------------
LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) [LangChain.js](https://github.com/langchain-ai/langchainjs). It extends the [LangChain Expression Language](/v0.1/docs/expression_language/) with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The current interface exposed is one inspired by [NetworkX](https://networkx.org/documentation/latest/).
The main use is for adding **cycles** to your LLM application. Crucially, LangGraph is NOT optimized for only **DAG** workflows. If you want to build a DAG, you should just use [LangChain Expression Language](/v0.1/docs/expression_language/).
Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
> Looking for the Python version? Click [here](https://github.com/langchain-ai/langgraph).
Installation[](#installation "Direct link to Installation")
------------------------------------------------------------
```bash
npm install @langchain/langgraph
```
Quick start[](#quick-start "Direct link to Quick start")
---------------------------------------------------------
One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in the graph as they execute, and each node updates this internal state with its return value after it executes. The way that the graph updates its internal state is defined by either the type of graph chosen or a custom function.
State in LangGraph can be pretty general, but to keep things simpler to start, we'll show off an example where the graph's state is limited to a list of chat messages using the built-in `MessageGraph` class. This is convenient when using LangGraph with LangChain chat models because we can return chat model output directly.
First, install the LangChain OpenAI integration package:
```bash
npm i @langchain/openai
```
We also need to export some environment variables:
```bash
export OPENAI_API_KEY=sk-...
```
And now we're ready! The graph below contains a single node called `"oracle"` that executes a chat model, then returns the result:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, BaseMessage } from "@langchain/core/messages";
import { END, MessageGraph } from "@langchain/langgraph";

const model = new ChatOpenAI({ temperature: 0 });

const graph = new MessageGraph();

graph.addNode("oracle", async (state: BaseMessage[]) => {
  return model.invoke(state);
});

graph.addEdge("oracle", END);

graph.setEntryPoint("oracle");

const runnable = graph.compile();
```
Let's run it!
```typescript
// For Message graph, input should always be a message or list of messages.
const res = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
```
```text
[
  HumanMessage {
    content: 'What is 1 + 1?',
    additional_kwargs: {}
  },
  AIMessage {
    content: '1 + 1 equals 2.',
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }
]
```
So what did we do here? Let's break it down step by step:
1. First, we initialize our model and a `MessageGraph`.
2. Next, we add a single node to the graph, called `"oracle"`, which simply calls the model with the given input.
3. We add an edge from this `"oracle"` node to the special value `END`. This means that execution will end after the current node.
4. We set `"oracle"` as the entrypoint to the graph.
5. We compile the graph, ensuring that no more modifications to it can be made.
Then, when we execute the graph:
1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, `"oracle"`.
2. The `"oracle"` node executes, invoking the chat model.
3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
4. Execution progresses to the special `END` value and outputs the final state.
And as a result, we get a list of two chat messages as output.
### Interaction with LCEL[](#interaction-with-lcel "Direct link to Interaction with LCEL")
As an aside for those already familiar with LangChain - `addNode` actually takes any runnable as input. In the above example, the passed function is automatically converted, but we could also have passed the model directly:
graph.addNode("oracle", model);
In which case the `.invoke()` method will be called when the graph executes.
Just make sure you are mindful of the fact that the input to the runnable is the entire current state. So this will fail:
```typescript
// This will NOT work with MessageGraph!
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant who always speaks in pirate dialect"],
  new MessagesPlaceholder("messages"),
]);

const chain = prompt.pipe(model);

// State is a list of messages, but our chain expects an object input:
//
// { messages: [] }
//
// Therefore, the graph will throw an exception when it executes here.
graph.addNode("oracle", chain);
```
Conditional edges[](#conditional-edges "Direct link to Conditional edges")
---------------------------------------------------------------------------
Now, let's move on to something a little bit less trivial. Because math can be difficult for LLMs, let's allow the LLM to conditionally call a calculator node using tool calling.
```bash
npm i langchain @langchain/openai
```
We'll recreate our graph with an additional `"calculator"` node that will take the most recent message, if it is a math expression, and calculate the result. We'll also bind the calculator to the OpenAI model as a tool so the model can optionally use it if it deems necessary:
```typescript
import { ToolMessage } from "@langchain/core/messages";
import { Calculator } from "langchain/tools/calculator";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";

const model = new ChatOpenAI({
  temperature: 0,
}).bind({
  tools: [convertToOpenAITool(new Calculator())],
  tool_choice: "auto",
});

const graph = new MessageGraph();

graph.addNode("oracle", async (state: BaseMessage[]) => {
  return model.invoke(state);
});

graph.addNode("calculator", async (state: BaseMessage[]) => {
  const tool = new Calculator();
  const toolCalls = state[state.length - 1].additional_kwargs.tool_calls ?? [];
  const calculatorCall = toolCalls.find(
    (toolCall) => toolCall.function.name === "calculator"
  );
  if (calculatorCall === undefined) {
    throw new Error("No calculator input found.");
  }
  const result = await tool.invoke(
    JSON.parse(calculatorCall.function.arguments)
  );
  return new ToolMessage({
    tool_call_id: calculatorCall.id,
    content: result,
  });
});

graph.addEdge("calculator", END);

graph.setEntryPoint("oracle");
```
Now let's think - what do we want to have happen?
* If the `"oracle"` node returns a message expecting a tool call, we want to execute the `"calculator"` node
* If not, we can just end execution
We can achieve this using **conditional edges**, which routes execution to a node based on the current state using a function.
Here's what that looks like:
```typescript
const router = (state: BaseMessage[]) => {
  const toolCalls = state[state.length - 1].additional_kwargs.tool_calls ?? [];
  if (toolCalls.length) {
    return "calculator";
  } else {
    return "end";
  }
};

graph.addConditionalEdges("oracle", router, {
  calculator: "calculator",
  end: END,
});
```
If the model output contains a tool call, we move to the `"calculator"` node. Otherwise, we end.
Great! Now all that's left is to compile the graph and try it out. Math-related questions are routed to the calculator tool:
```typescript
const runnable = graph.compile();

const mathResponse = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
```
```text
[
  HumanMessage {
    content: 'What is 1 + 1?',
    additional_kwargs: {}
  },
  AIMessage {
    content: '',
    additional_kwargs: { function_call: undefined, tool_calls: [Array] }
  },
  ToolMessage {
    content: '2',
    name: undefined,
    additional_kwargs: {},
    tool_call_id: 'call_P7KWQoftVsj6fgsqKyolWp91'
  }
]
```
While conversational responses are returned directly:
```typescript
const otherResponse = await runnable.invoke(
  new HumanMessage("What is your name?")
);
```
```text
[
  HumanMessage {
    content: 'What is your name?',
    additional_kwargs: {}
  },
  AIMessage {
    content: 'My name is Assistant. How can I assist you today?',
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }
]
```
Cycles[](#cycles "Direct link to Cycles")
------------------------------------------
Now, let's go over a more general example with a cycle. We will recreate the [`AgentExecutor`](/v0.1/docs/modules/agents/concepts/#agentexecutor) class from LangChain.
The benefit of creating it with LangGraph is that it is more modifiable.
We will need to install some LangChain packages:
```bash
npm install langchain @langchain/core @langchain/community @langchain/openai
```
We also need additional environment variables.
```bash
export OPENAI_API_KEY=sk-...
export TAVILY_API_KEY=tvly-...
```
Optionally, we can set up [LangSmith](https://docs.smith.langchain.com/) for best-in-class observability.
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=ls__...
export LANGCHAIN_ENDPOINT=https://api.langchain.com
```
### Set up the tools[](#set-up-the-tools "Direct link to Set up the tools")
As above, we will first define the tools we want to use. For this simple example, we will use a built-in search tool via Tavily. However, it is really easy to create your own tools - see documentation [here](/v0.1/docs/modules/agents/tools/dynamic/) on how to do that.
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [new TavilySearchResults({ maxResults: 1 })];
```
We can now wrap these tools in a ToolExecutor, which simply takes in a ToolInvocation and calls that tool, returning the output.
A ToolInvocation is any type with `tool` and `toolInput` attributes.
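In TypeScript terms, that shape looks roughly like the following (a sketch inferred from the description above, not the library's exported type):

```typescript
// Approximate shape of a ToolInvocation (assumption based on the prose above):
type ToolInvocation = {
  tool: string; // the name of the registered tool to call
  toolInput: string | Record<string, unknown>; // the input passed to that tool
};
```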
```typescript
import { ToolExecutor } from "@langchain/langgraph/prebuilt";

const toolExecutor = new ToolExecutor({ tools });
```
### Set up the model[](#set-up-the-model "Direct link to Set up the model")
Now we need to load the chat model we want to use. This time, we'll use the older function calling interface. This walkthrough will use OpenAI, but we can choose any model that supports OpenAI function calling.
```typescript
import { ChatOpenAI } from "@langchain/openai";

// We will set streaming: true so that we can stream tokens.
// See the streaming section for more information on this.
const model = new ChatOpenAI({
  temperature: 0,
  streaming: true,
});
```
After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI function calling, and then bind them to the model class.
```typescript
import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";

const toolsAsOpenAIFunctions = tools.map((tool) =>
  convertToOpenAIFunction(tool)
);
const newModel = model.bind({
  functions: toolsAsOpenAIFunctions,
});
```
### Define the agent state[](#define-the-agent-state "Direct link to Define the agent state")
This time, we'll use the more general `StateGraph`. This graph is parameterized by a state object that it passes around to each node. Remember that each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted by annotating the state object you construct the graph with.
For this example, the state we will track will just be a list of messages. We want each node to just add messages to that list. Therefore, we will use an object with one key (`messages`) with the value as an object: `{ value: Function, default?: () => any }`
The `default` key must be a factory that returns the default value for that attribute.
```typescript
import { BaseMessage } from "@langchain/core/messages";

const agentState = {
  messages: {
    value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
    default: () => [],
  },
};
```
You can think of the `MessageGraph` used in the initial example as a preconfigured version of this graph. The difference is that the state is directly a list of messages, instead of an object containing a key called `"messages"` whose value is a list of messages. The `MessageGraph` update step is similar to the one above where we always append the returned values of a node to the internal state.
### Define the nodes[](#define-the-nodes "Direct link to Define the nodes")
We now need to define a few different nodes in our graph. In LangGraph, a node can be either a function or a [runnable](/v0.1/docs/expression_language/). There are two main nodes we need for this:
1. The agent: responsible for deciding what (if any) actions to take.
2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action.
We will also need to define some edges. Some of these edges may be conditional. The reason they are conditional is that based on the output of a node, one of several paths may be taken. The path that is taken is not known until that node is run (the LLM decides).
1. Conditional Edge: after the agent is called, we should either: a. If the agent said to take an action, then the function to invoke tools should be called b. If the agent said that it was finished, then it should finish
2. Normal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next
Let's define the nodes, as well as a function to decide which conditional edge to take.
```typescript
import { FunctionMessage } from "@langchain/core/messages";
import { AgentAction } from "@langchain/core/agents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Define the function that determines whether to continue or not
const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1];
  // If there is no function call, then we finish
  if (
    !("function_call" in lastMessage.additional_kwargs) ||
    !lastMessage.additional_kwargs.function_call
  ) {
    return "end";
  }
  // Otherwise if there is, we continue
  return "continue";
};

// Define the function to execute tools
const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
  const { messages } = state;
  // Based on the continue condition
  // we know the last message involves a function call
  const lastMessage = messages[messages.length - 1];
  if (!lastMessage) {
    throw new Error("No messages found.");
  }
  if (!lastMessage.additional_kwargs.function_call) {
    throw new Error("No function call found in message.");
  }
  // We construct an AgentAction from the function_call
  return {
    tool: lastMessage.additional_kwargs.function_call.name,
    toolInput: JSON.parse(
      lastMessage.additional_kwargs.function_call.arguments
    ),
    log: "",
  };
};

// Define the function that calls the model
const callModel = async (state: { messages: Array<BaseMessage> }) => {
  const { messages } = state;
  // You can use a prompt here to tweak model behavior.
  // You can also just pass messages to the model directly.
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "You are a helpful assistant."],
    new MessagesPlaceholder("messages"),
  ]);
  const response = await prompt.pipe(newModel).invoke({ messages });
  // We return a list, because this will get added to the existing list
  return {
    messages: [response],
  };
};

const callTool = async (state: { messages: Array<BaseMessage> }) => {
  const action = _getAction(state);
  // We call the tool_executor and get back a response
  const response = await toolExecutor.invoke(action);
  // We use the response to create a FunctionMessage
  const functionMessage = new FunctionMessage({
    content: response,
    name: action.tool,
  });
  // We return a list, because this will get added to the existing list
  return { messages: [functionMessage] };
};
```
### Define the graph[](#define-the-graph "Direct link to Define the graph")
We can now put it all together and define the graph!
```typescript
import { StateGraph, END } from "@langchain/langgraph";
import { RunnableLambda } from "@langchain/core/runnables";

// Define a new graph
const workflow = new StateGraph({
  channels: agentState,
});

// Define the two nodes we will cycle between
workflow.addNode("agent", callModel);
workflow.addNode("action", callTool);

// Set the entrypoint as `agent`
// This means that this node is the first one called
workflow.setEntryPoint("agent");

// We now add a conditional edge
workflow.addConditionalEdges(
  // First, we define the start node. We use `agent`.
  // This means these are the edges taken after the `agent` node is called.
  "agent",
  // Next, we pass in the function that will determine which node is called next.
  shouldContinue,
  // Finally we pass in a mapping.
  // The keys are strings, and the values are other nodes.
  // END is a special node marking that the graph should finish.
  // What will happen is we will call `shouldContinue`, and then the output of that
  // will be matched against the keys in this mapping.
  // Based on which one it matches, that node will then be called.
  {
    // If `continue`, then we call the tool node.
    continue: "action",
    // Otherwise we finish.
    end: END,
  }
);

// We now add a normal edge from `action` to `agent`.
// This means that after `action` is called, the `agent` node is called next.
workflow.addEdge("action", "agent");

// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile();
```
### Use it![](#use-it "Direct link to Use it!")
We can now use it! This now exposes the [same interface](/v0.1/docs/expression_language/) as all other LangChain runnables. This runnable accepts a list of messages.
```typescript
import { HumanMessage } from "@langchain/core/messages";

const inputs = {
  messages: [new HumanMessage("what is the weather in sf")],
};
const result = await app.invoke(inputs);
```
See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).
This may take a little bit - it's making a few calls behind the scenes. In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.
Streaming[](#streaming "Direct link to Streaming")
---------------------------------------------------
LangGraph has support for several different types of streaming.
### Streaming Node Output[](#streaming-node-output "Direct link to Streaming Node Output")
One of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.
```typescript
const inputs = {
  messages: [new HumanMessage("what is the weather in sf")],
};
for await (const output of await app.stream(inputs)) {
  console.log("output", output);
  console.log("-----\n");
}
```
See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
Running Examples[](#running-examples "Direct link to Running Examples")
------------------------------------------------------------------------
You can find some more [example notebooks of different use-cases in the `examples/` folder](https://github.com/langchain-ai/langgraphjs/tree/main/examples) in the LangGraph repo. These example notebooks use the [Deno runtime](https://deno.land/).
To pull in environment variables, you can create a `.env` file at the **root** of this repo (not in the `examples/` folder itself).
When to Use[](#when-to-use "Direct link to When to Use")
---------------------------------------------------------
When should you use this versus [LangChain Expression Language](/v0.1/docs/expression_language/)?
If you need cycles.
Langchain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding in cycles. `langgraph` adds that syntax.
Examples[](#examples "Direct link to Examples")
------------------------------------------------
### ChatAgentExecutor: with function calling[](#chatagentexecutor-with-function-calling "Direct link to ChatAgentExecutor: with function calling")
This agent executor takes a list of messages as input and outputs a list of messages. All agent state is represented as a list of messages. This specifically uses OpenAI function calling. This is the recommended agent executor for newer chat-based models that support function calling.
* [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/blob/main/examples/chat_agent_executor_with_function_calling/base.ipynb): Walks through creating this type of executor from scratch
### AgentExecutor[](#agentexecutor "Direct link to AgentExecutor")
This agent executor uses existing LangChain agents.
* [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/blob/main/examples/agent_executor/base.ipynb): Walks through creating this type of executor from scratch
### Multi-agent Examples[](#multi-agent-examples "Direct link to Multi-agent Examples")
* [Multi-agent collaboration](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/multi_agent_collaboration.ipynb): how to create two agents that work together to accomplish a task
* [Multi-agent with supervisor](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/agent_supervisor.ipynb): how to orchestrate individual agents by using an LLM as a "supervisor" to distribute work
* [Hierarchical agent teams](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/hierarchical_agent_teams.ipynb): how to orchestrate "teams" of agents as nested graphs that can collaborate to solve a problem
Documentation[](#documentation "Direct link to Documentation")
---------------------------------------------------------------
There are only a few new APIs to use.
### StateGraph[](#stategraph "Direct link to StateGraph")
The main entrypoint is `StateGraph`.
```typescript
import { StateGraph } from "@langchain/langgraph";
```
This class is responsible for constructing the graph. It exposes an interface inspired by [NetworkX](https://networkx.org/documentation/latest/). This graph is parameterized by a state object that it passes around to each node.
#### `constructor`[](#constructor "Direct link to constructor")
```typescript
interface StateGraphArgs<T = any> {
  channels: Record<
    string,
    {
      value: BinaryOperator<T> | null;
      default?: () => T;
    }
  >;
}

class StateGraph<T> extends Graph {
  constructor(fields: StateGraphArgs<T>) {}
}
```
When constructing the graph, you need to pass in a schema for a state. Each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted by annotating the state object you construct the graph with.
Let's take a look at an example:
```typescript
import { BaseMessage } from "@langchain/core/messages";

const schema = {
  input: {
    value: null,
  },
  agentOutcome: {
    value: null,
  },
  steps: {
    value: (x: Array<BaseMessage>, y: Array<BaseMessage>) => x.concat(y),
    default: () => [],
  },
};
```
We can then use this like:
```typescript
// Initialize the StateGraph with this state
const graph = new StateGraph({ channels: schema });

// Create nodes and edges...

// Compile the graph
const app = graph.compile();

// The inputs should be an object, because the schema is an object
const inputs = {
  // Let's assume this is the input
  input: "hi",
  // Let's assume agentOutcome is set by the graph at some point.
  // It doesn't need to be provided, and it will be null by default.
};
```
### `.addNode`[](#addnode "Direct link to addnode")
```typescript
addNode(key: string, action: RunnableLike<RunInput, RunOutput>): void
```
This method adds a node to the graph. It takes two arguments:
* `key`: A string representing the name of the node. This must be unique.
* `action`: The action to take when this node is called. This should either be a function or a runnable.
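For example, reusing the `graph` and `model` from the quick start above (the `"chat"` key is illustrative):

```typescript
// A node backed by a plain async function...
graph.addNode("oracle", async (state: BaseMessage[]) => model.invoke(state));

// ...or a node backed by any runnable, such as the chat model itself:
graph.addNode("chat", model);
```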
### `.addEdge`[](#addedge "Direct link to addedge")
```typescript
addEdge(startKey: string, endKey: string): void
```
Creates an edge from one node to the next. This means that output of the first node will be passed to the next node. It takes two arguments.
* `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
* `endKey`: A string representing the name of the end node. This key must have already been registered in the graph.
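For instance, from the quick start above:

```typescript
// Output of the "oracle" node flows to the special END node, finishing the run.
graph.addEdge("oracle", END);
```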
### `.addConditionalEdges`[](#addconditionaledges "Direct link to addconditionaledges")
```typescript
addConditionalEdges(
  startKey: string,
  condition: CallableFunction,
  conditionalEdgeMapping: Record<string, string>
): void
```
This method adds conditional edges. What this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node. This takes three arguments:
* `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
* `condition`: A function to call to decide what to do next. The input will be the output of the start node. It should return a string that is present in `conditionalEdgeMapping` and represents the edge to take.
* `conditionalEdgeMapping`: A mapping of string to string. The keys should be strings that may be returned by `condition`. The values should be the downstream node to call if that condition is returned.
### `.setEntryPoint`
```typescript
setEntryPoint(key: string): void
```
Sets the entrypoint of the graph: the node that is called first. It only takes one argument:
* `key`: The name of the node that should be called first.
### `.setFinishPoint`
```typescript
setFinishPoint(key: string): void
```
Sets the exit point of the graph: when this node is called, its result becomes the final result of the graph. It only has one argument:
* `key`: The name of the node whose result, once it is called, is returned as the final output.
Note: this does not need to be called if you have previously created an edge (conditional or normal) to `END` at any point.
### `END`
```typescript
import { END } from "@langchain/langgraph";
```
This is a special node representing the end of the graph: anything passed to this node becomes the final output of the graph. It can be used in two places (see the sketch below the list):
* As the `endKey` in `addEdge`
* As a value in `conditionalEdgeMapping` as passed to `addConditionalEdges`
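Putting the last few pieces together, here is a minimal sketch showing both ways to reach `END`; the node names `firstNode`, `router`, and `lastNode`, as well as the `routeFn` routing function, are hypothetical:

```typescript
import { END } from "@langchain/langgraph";

// Hypothetical routing function, for illustration.
const routeFn = (state: any) => (state.done ? "exit" : "continue");

// The node that runs first:
workflow.setEntryPoint("firstNode");

// Reach END through a normal edge...
workflow.addEdge("lastNode", END);

// ...or through a conditional edge. Either way, no separate
// setFinishPoint(...) call is needed.
workflow.addConditionalEdges("router", routeFn, {
  continue: "lastNode",
  exit: END,
});
```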
Examples
--------------------------------------------------
### AgentExecutor
See the above Quick Start for an example of re-creating the LangChain [`AgentExecutor`](/v0.1/docs/modules/agents/concepts/#agentexecutor) class.
### Forced Function Calling
One simple modification of the above graph is to make a certain tool always be called first. This can be useful if you want to enforce that a particular tool is called, but still want to enable agentic behavior after the fact.
Assuming you have completed the Quick Start above, you can build on it like this:
#### Define the first tool call
Here, we manually define the first tool call that we will make. Notice that it does the same thing the `agent` node would have done (it adds the `agentOutcome` key), so that we can easily plug it in.
```typescript
import { AgentStep, AgentAction, AgentFinish } from "@langchain/core/agents";

// Define the data type that the agent will return.
type AgentData = {
  input: string;
  steps: Array<AgentStep>;
  agentOutcome?: AgentAction | AgentFinish;
};

const firstAgent = (inputs: AgentData) => {
  const newInputs = inputs;
  const action = {
    // We force call this tool
    tool: "tavily_search_results_json",
    // We just pass in the `input` key to this tool
    toolInput: newInputs.input,
    log: "",
  };
  newInputs.agentOutcome = action;
  return newInputs;
};
```
#### Create the graph
We can now create a new graph with this new node:
```typescript
const workflow = new Graph();

// Add the same nodes as before, plus this "first agent"
workflow.addNode("firstAgent", firstAgent);
workflow.addNode("agent", agent);
workflow.addNode("tools", executeTools);

// We now set the entry point to be this first agent
workflow.setEntryPoint("firstAgent");

// We define the same edges as before
workflow.addConditionalEdges("agent", shouldContinue, {
  continue: "tools",
  exit: END,
});
workflow.addEdge("tools", "agent");

// We also define a new edge, from the "first agent" to the tools node,
// so that we can call the tool.
workflow.addEdge("firstAgent", "tools");

// We now compile the graph as before
const chain = workflow.compile();
```
#### Use it!
We can now use it as before! Depending on whether or not the first tool call is actually useful, this may save you an LLM call or two.
```typescript
const result = await chain.invoke({
  input: "what is the weather in sf",
  steps: [],
});
```
You can see a LangSmith trace of this chain [here](https://smith.langchain.com/public/2e0a089f-8c05-405a-8404-b0a60b79a84a/r).
Q&A with RAG
============
Overview
------------------------------------------------
One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.
### What is RAG?
RAG is a technique for augmenting LLM knowledge with additional data.
LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data available up to the point in time at which they were trained. If you want to build AI applications that can reason about private data or data introduced after a model's cutoff date, you need to augment the model's knowledge with the specific information it needs. The process of retrieving the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).
LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally.
Note: Here we focus on Q&A for unstructured data. Two RAG use cases which we cover elsewhere are:
* [Q&A over SQL data](/v0.1/docs/use_cases/sql/)
* [Q&A over code](/v0.1/docs/use_cases/code_understanding/) (e.g., TypeScript)
RAG Architecture
------------------------------------------------------------------------
A typical RAG application has two main components:
**Indexing**: a pipeline for ingesting data from a source and indexing it. _This usually happens offline_.
**Retrieval and generation**: the actual RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes that to the model.
The most common full sequence from raw data to answer looks like:
**Indexing**
1. **Load**: First we need to load our data. This is done with [DocumentLoaders](/v0.1/docs/modules/data_connection/document_loaders/).
2. **Split**: [Text splitters](/v0.1/docs/modules/data_connection/document_transformers/) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.
3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/v0.1/docs/modules/data_connection/vectorstores/) and [Embeddings](/v0.1/docs/modules/data_connection/text_embedding/) model.
![Indexing](/v0.1/assets/images/rag_indexing-8160f90a90a33253d0154659cf7d453f.png)
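As a rough sketch, these three indexing steps might look like the following in LangChain.js. The source URL, the chunk sizes, and the choice of an in-memory vector store with OpenAI embeddings are illustrative assumptions, not prescriptions:

```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// 1. Load: pull raw text from a source (URL chosen for illustration).
const loader = new CheerioWebBaseLoader("https://example.com/some-article");
const docs = await loader.load();

// 2. Split: break large documents into smaller, overlapping chunks.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splits = await splitter.splitDocuments(docs);

// 3. Store: embed the chunks and index them in a vector store.
const vectorStore = await MemoryVectorStore.fromDocuments(
  splits,
  new OpenAIEmbeddings()
);
```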
**Retrieval and generation**
1. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/v0.1/docs/modules/data_connection/retrievers/).
2. **Generate**: A [ChatModel](/v0.1/docs/modules/model_io/chat/) / [LLM](/v0.1/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data.
![Retrieval generation](/v0.1/assets/images/rag_retrieval_generation-1046a4668d6bb08786ef73c56d4f228a.png)
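A matching retrieval-and-generation sketch, continuing from the `vectorStore` built in the indexing sketch above; the prompt wording and model choice are again illustrative:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

// 1. Retrieve: expose the indexed splits as a retriever.
const retriever = vectorStore.asRetriever();

// 2. Generate: stuff the retrieved context and the question into a prompt.
const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the question using only the following context:\n{context}\n\nQuestion: {question}`
);

const chain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  new ChatOpenAI({ modelName: "gpt-3.5-turbo" }),
  new StringOutputParser(),
]);

const answer = await chain.invoke("What does the article say about X?");
```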
Table of contents
---------------------------------------------------------------------------
* [Quickstart](/v0.1/docs/use_cases/question_answering/quickstart/): We recommend starting here. Many of the following guides assume you fully understand the architecture shown in the Quickstart.
* [Returning sources](/v0.1/docs/use_cases/question_answering/sources/): How to return the source documents used in a particular generation.
* [Citations](/v0.1/docs/use_cases/question_answering/citations/): How to cite which parts of the source documents are referenced in a particular generation.
* [Streaming](/v0.1/docs/use_cases/question_answering/streaming/): How to stream final answers as well as intermediate steps.
* [Adding chat history](/v0.1/docs/use_cases/question_answering/chat_history/): How to add chat history to a Q&A app.
* [Per-user retrieval](/v0.1/docs/use_cases/question_answering/per_user/): How to do retrieval when each user has their own private data.
* [Using agents](/v0.1/docs/use_cases/question_answering/conversational_retrieval_agents/): How to use agents for Q&A.
* [Using local models](/v0.1/docs/use_cases/question_answering/local_retrieval_qa/): How to use local models for Q&A.
Use cases
=========
Walkthroughs of common end-to-end use cases
* [🗃️ SQL](/v0.1/docs/use_cases/sql/) (5 items)
* [🗃️ Chatbots](/v0.1/docs/use_cases/chatbots/) (4 items)
* [🗃️ Extraction](/v0.1/docs/use_cases/extraction/) (3 items)
* [🗃️ Query Analysis](/v0.1/docs/use_cases/query_analysis/) (3 items)
* [🗃️ Q&A with RAG](/v0.1/docs/use_cases/question_answering/) (8 items)
* [🗃️ Tool use](/v0.1/docs/use_cases/tool_use/) (6 items)
* [📄️ Interacting with APIs](/v0.1/docs/use_cases/api/): Lots of data and information is stored behind APIs.
* [📄️ Tabular Question Answering](/v0.1/docs/use_cases/tabular/): Lots of data and information is stored in tabular data, whether it be CSVs, Excel sheets, or SQL tables.
* [🗃️ Graphs](/v0.1/docs/use_cases/graph/) (5 items)
* [📄️ Summarization](/v0.1/docs/use_cases/summarization/): A common use case is wanting to summarize long documents.
* [🗃️ Agent Simulations](/v0.1/docs/use_cases/agent_simulations/) (2 items)
* [🗃️ Autonomous Agents](/v0.1/docs/use_cases/autonomous_agents/) (3 items)
* [📄️ Code Understanding](/v0.1/docs/use_cases/code_understanding/)
* [📄️ Audio/Video Structured Extraction](/v0.1/docs/use_cases/media/): Google's Gemini API offers support for audio and video input, along with function calling.
Autonomous Agents
=================
Autonomous Agents are agents designed to be longer-running. You give them one or more long-term goals, and they independently execute towards those goals. These applications combine tool usage and long-term memory.
At the moment, Autonomous Agents are fairly experimental and based on other open-source projects. By implementing these open-source projects in LangChain primitives, we can get the benefits of LangChain: easy switching and experimenting with multiple LLMs, usage of different vectorstores as memory, and usage of LangChain's collection of tools.
* [📄️ SalesGPT](/v0.1/docs/use_cases/autonomous_agents/sales_gpt/): This notebook demonstrates an implementation of a Context-Aware AI Sales agent with a Product Knowledge Base.
* [📄️ AutoGPT](/v0.1/docs/use_cases/autonomous_agents/auto_gpt/): Original repo: https://github.com/Significant-Gravitas/Auto-GPT
* [📄️ BabyAGI](/v0.1/docs/use_cases/autonomous_agents/baby_agi/): Original repo: https://github.com/yoheinakajima/babyagi
Providers
=========
LangChain integrates with many providers.
Partner Packages
------------------------------------------------------------------------
These providers have standalone `@langchain/{provider}` packages for improved versioning, dependency management and testing.
* [Anthropic](https://www.npmjs.com/package/@langchain/anthropic)
* [Azure OpenAI](https://www.npmjs.com/package/@langchain/azure-openai)
* [Cloudflare](https://www.npmjs.com/package/@langchain/cloudflare)
* [Cohere](https://www.npmjs.com/package/@langchain/cohere)
* [Exa](https://www.npmjs.com/package/@langchain/exa)
* [Google GenAI](https://www.npmjs.com/package/@langchain/google-genai)
* [Google VertexAI](https://www.npmjs.com/package/@langchain/google-vertexai)
* [Google VertexAI Web](https://www.npmjs.com/package/@langchain/google-vertexai-web)
* [Groq](https://www.npmjs.com/package/@langchain/groq)
* [MistralAI](https://www.npmjs.com/package/@langchain/mistralai)
* [MongoDB](https://www.npmjs.com/package/@langchain/mongodb)
* [Nomic](https://www.npmjs.com/package/@langchain/nomic)
* [OpenAI](https://www.npmjs.com/package/@langchain/openai)
* [Pinecone](https://www.npmjs.com/package/@langchain/pinecone)
* [Redis](https://www.npmjs.com/package/@langchain/redis)
* [Weaviate](https://www.npmjs.com/package/@langchain/weaviate)
* [Yandex](https://www.npmjs.com/package/@langchain/yandex)
People
======
There are some incredible humans from all over the world who have been instrumental in helping the LangChain.js community flourish 🌐!
This page highlights a few of those folks who have dedicated their time to the open-source repo in the form of direct contributions and reviews.
Top reviewers
---------------------------------------------------------------
As LangChain.js has grown, the amount of surface area that maintainers cover has grown as well.
Thank you to the following folks who have gone above and beyond in reviewing incoming PRs 🙏!
[@afirstenberg](https://github.com/afirstenberg), [@sullivan-sean](https://github.com/sullivan-sean), [@ppramesi](https://github.com/ppramesi), [@jacobrosenthal](https://github.com/jacobrosenthal), [@tomasonjo](https://github.com/tomasonjo), [@mieslep](https://github.com/mieslep)
Top recent contributors
---------------------------------------------------------------------------------------------
The list below contains contributors who have had the most PRs merged in the last three months, weighted (imperfectly) by impact.
Thank you all so much for your time and efforts in making LangChain.js better ❤️!
[@afirstenberg](https://github.com/afirstenberg), [@sinedied](https://github.com/sinedied), [@lokesh-couchbase](https://github.com/lokesh-couchbase), [@nicoloboschi](https://github.com/nicoloboschi), [@MJDeligan](https://github.com/MJDeligan), [@tomasonjo](https://github.com/tomasonjo), [@lukywong](https://github.com/lukywong), [@rahilvora](https://github.com/rahilvora), [@davidfant](https://github.com/davidfant), [@easwee](https://github.com/easwee), [@fahreddinozcan](https://github.com/fahreddinozcan), [@karol-f](https://github.com/karol-f), [@janvi-kalra](https://github.com/janvi-kalra), [@Anush008](https://github.com/Anush008), [@cinqisap](https://github.com/cinqisap), [@andrewnguonly](https://github.com/andrewnguonly), [@seuha516](https://github.com/seuha516), [@jasonnathan](https://github.com/jasonnathan), [@mieslep](https://github.com/mieslep), [@jeasonnow](https://github.com/jeasonnow)
Core maintainers
------------------------------------------------------------------------
Hello there 👋!
We're LangChain's core maintainers. If you've spent time in the community, you've probably crossed paths with at least one of us already.
[@jacoblee93](https://github.com/jacoblee93), [@hwchase17](https://github.com/hwchase17), [@bracesproul](https://github.com/bracesproul), [@dqbd](https://github.com/dqbd), [@nfcampos](https://github.com/nfcampos)
Top all-time contributors
---------------------------------------------------------------------------------------------------
And finally, this is an all-time list of all-stars who have made significant contributions to the framework 🌟:
[@afirstenberg](https://github.com/afirstenberg), [@ppramesi](https://github.com/ppramesi), [@jacobrosenthal](https://github.com/jacobrosenthal), [@sullivan-sean](https://github.com/sullivan-sean), [@skarard](https://github.com/skarard), [@tomasonjo](https://github.com/tomasonjo), [@chasemcdo](https://github.com/chasemcdo), [@MaximeThoonsen](https://github.com/MaximeThoonsen), [@mieslep](https://github.com/mieslep), [@sinedied](https://github.com/sinedied), [@ysnows](https://github.com/ysnows), [@tyumentsev4](https://github.com/tyumentsev4), [@nickscamara](https://github.com/nickscamara), [@nigel-daniels](https://github.com/nigel-daniels), [@MJDeligan](https://github.com/MJDeligan), [@malandis](https://github.com/malandis), [@danielchalef](https://github.com/danielchalef), [@easwee](https://github.com/easwee), [@kwkr](https://github.com/kwkr), [@ewfian](https://github.com/ewfian), [@Swimburger](https://github.com/Swimburger), [@mfortman11](https://github.com/mfortman11), [@jasondotparse](https://github.com/jasondotparse), [@kristianfreeman](https://github.com/kristianfreeman), [@neebdev](https://github.com/neebdev), [@tsg](https://github.com/tsg), [@lokesh-couchbase](https://github.com/lokesh-couchbase), [@nicoloboschi](https://github.com/nicoloboschi), [@zackproser](https://github.com/zackproser), [@justindra](https://github.com/justindra), [@vincelwt](https://github.com/vincelwt), [@cwoolum](https://github.com/cwoolum), [@sunner](https://github.com/sunner), [@lukywong](https://github.com/lukywong), [@mayooear](https://github.com/mayooear), [@chitalian](https://github.com/chitalian), [@rahilvora](https://github.com/rahilvora), [@paaatrrrick](https://github.com/paaatrrrick), [@alexleventer](https://github.com/alexleventer), [@3eif](https://github.com/3eif), [@BitVoyagerMan](https://github.com/BitVoyagerMan), [@xixixao](https://github.com/xixixao), [@jo32](https://github.com/jo32), [@RohitMidha23](https://github.com/RohitMidha23), [@karol-f](https://github.com/karol-f), [@konstantinov-raft](https://github.com/konstantinov-raft), [@volodymyr-memsql](https://github.com/volodymyr-memsql), [@jameshfisher](https://github.com/jameshfisher), [@the-powerpointer](https://github.com/the-powerpointer), [@davidfant](https://github.com/davidfant), [@MthwRobinson](https://github.com/MthwRobinson), [@mishushakov](https://github.com/mishushakov), [@SimonPrammer](https://github.com/SimonPrammer), [@munkhorgil](https://github.com/munkhorgil), [@alx13](https://github.com/alx13), [@castroCrea](https://github.com/castroCrea), [@samheutmaker](https://github.com/samheutmaker), [@archie-swif](https://github.com/archie-swif), [@fahreddinozcan](https://github.com/fahreddinozcan), [@valdo99](https://github.com/valdo99), [@gmpetrov](https://github.com/gmpetrov), [@mattzcarey](https://github.com/mattzcarey), [@albertpurnama](https://github.com/albertpurnama), [@yroc92](https://github.com/yroc92), [@Basti-an](https://github.com/Basti-an), [@CarlosZiegler](https://github.com/CarlosZiegler), [@iloveitaly](https://github.com/iloveitaly), [@dilling](https://github.com/dilling), [@anselm94](https://github.com/anselm94), [@sarangan12](https://github.com/sarangan12), [@gramliu](https://github.com/gramliu), [@jeffchuber](https://github.com/jeffchuber), [@ywkim](https://github.com/ywkim), [@jirimoravcik](https://github.com/jirimoravcik), [@janvi-kalra](https://github.com/janvi-kalra), [@Anush008](https://github.com/Anush008), [@yuku](https://github.com/yuku), [@conroywhitney](https://github.com/conroywhitney), [@Czechh](https://github.com/Czechh), [@adam101](https://github.com/adam101), [@jaclar](https://github.com/jaclar), [@ivoneijr](https://github.com/ivoneijr), [@tonisives](https://github.com/tonisives), [@Njuelle](https://github.com/Njuelle), [@Roland0511](https://github.com/Roland0511), [@SebastjanPrachovskij](https://github.com/SebastjanPrachovskij), [@cinqisap](https://github.com/cinqisap), [@dylanintech](https://github.com/dylanintech), [@andrewnguonly](https://github.com/andrewnguonly), [@ShaunBaker](https://github.com/ShaunBaker), [@machulav](https://github.com/machulav), [@dersia](https://github.com/dersia), [@joshsny](https://github.com/joshsny), [@jl4nz](https://github.com/jl4nz), [@eactisgrosso](https://github.com/eactisgrosso), [@frankolson](https://github.com/frankolson), [@uthmanmoh](https://github.com/uthmanmoh), [@Jordan-Gilliam](https://github.com/Jordan-Gilliam), [@winor30](https://github.com/winor30), [@willemmulder](https://github.com/willemmulder), [@aixgeek](https://github.com/aixgeek), [@seuha516](https://github.com/seuha516), [@mhart](https://github.com/mhart), [@mvaker](https://github.com/mvaker), [@vitaly-ps](https://github.com/vitaly-ps), [@cbh123](https://github.com/cbh123), [@Neverland3124](https://github.com/Neverland3124), [@jasonnathan](https://github.com/jasonnathan), [@Maanethdesilva](https://github.com/Maanethdesilva), [@fuleinist](https://github.com/fuleinist), [@kwadhwa18](https://github.com/kwadhwa18), [@jeasonnow](https://github.com/jeasonnow), [@sousousore1](https://github.com/sousousore1), [@seth-25](https://github.com/seth-25), [@tomi-mercado](https://github.com/tomi-mercado), [@JHeidinga](https://github.com/JHeidinga), [@niklas-lohmann](https://github.com/niklas-lohmann), [@Durisvk](https://github.com/Durisvk), [@BjoernRave](https://github.com/BjoernRave), [@qalqi](https://github.com/qalqi), [@katarinasupe](https://github.com/katarinasupe), [@andrewlei](https://github.com/andrewlei), [@floomby](https://github.com/floomby), [@milanjrodd](https://github.com/milanjrodd), [@NickMandylas](https://github.com/NickMandylas), [@DravenCat](https://github.com/DravenCat), [@Alireza29675](https://github.com/Alireza29675), [@zhengxs2018](https://github.com/zhengxs2018), [@clemenspeters](https://github.com/clemenspeters), [@cmtoomey](https://github.com/cmtoomey), [@igorshapiro](https://github.com/igorshapiro), [@ezynda3](https://github.com/ezynda3)
[![Avatar for more-by-more](https://avatars.githubusercontent.com/u/67614844?u=d3d818efb3e3e2ddda589d6157f853922a460f5b&v=4)](https://github.com/more-by-more)[@more-by-more](https://github.com/more-by-more)
[![Avatar for noble-varghese](https://avatars.githubusercontent.com/u/109506617?u=c1d2a1813c51bff89bfa85d533633ed4c201ba2e&v=4)](https://github.com/noble-varghese)[@noble-varghese](https://github.com/noble-varghese)
[![Avatar for SananR](https://avatars.githubusercontent.com/u/14956384?u=538ff9bf09497059b312067333f68eba75594802&v=4)](https://github.com/SananR)[@SananR](https://github.com/SananR)
[![Avatar for fraserxu](https://avatars.githubusercontent.com/u/1183541?v=4)](https://github.com/fraserxu)[@fraserxu](https://github.com/fraserxu)
[![Avatar for ashvardanian](https://avatars.githubusercontent.com/u/1983160?u=536f2558c6ac33b74a6d89520dcb27ba46954070&v=4)](https://github.com/ashvardanian)[@ashvardanian](https://github.com/ashvardanian)
[![Avatar for adeelehsan](https://avatars.githubusercontent.com/u/8156837?u=99cacfbd962ff58885bdf68e5fc640fc0d3cb87c&v=4)](https://github.com/adeelehsan)[@adeelehsan](https://github.com/adeelehsan)
[![Avatar for henriquegdantas](https://avatars.githubusercontent.com/u/12974790?u=80d76f256a7854da6ae441b6ee078119877398e7&v=4)](https://github.com/henriquegdantas)[@henriquegdantas](https://github.com/henriquegdantas)
[![Avatar for evad1n](https://avatars.githubusercontent.com/u/50718218?u=ee35784971ef8dcdfdb25cfe0a8284ca48724938&v=4)](https://github.com/evad1n)[@evad1n](https://github.com/evad1n)
[![Avatar for benjibc](https://avatars.githubusercontent.com/u/1585539?u=654a21985c875f78a20eda7e4884e8d64de86fba&v=4)](https://github.com/benjibc)[@benjibc](https://github.com/benjibc)
[![Avatar for P-E-B](https://avatars.githubusercontent.com/u/38215315?u=3985b6a3ecb0e8338c5912ea9e20787152d0ad7a&v=4)](https://github.com/P-E-B)[@P-E-B](https://github.com/P-E-B)
[![Avatar for omikader](https://avatars.githubusercontent.com/u/16735699?u=29fc7c7c777c3cabc22449b68bbb01fe2fa0b574&v=4)](https://github.com/omikader)[@omikader](https://github.com/omikader)
[![Avatar for jasongill](https://avatars.githubusercontent.com/u/241711?v=4)](https://github.com/jasongill)[@jasongill](https://github.com/jasongill)
[![Avatar for puigde](https://avatars.githubusercontent.com/u/83642160?u=7e76b13b7484e4601bea47dc6e238c89d453a24d&v=4)](https://github.com/puigde)[@puigde](https://github.com/puigde)
[![Avatar for chase-crumbaugh](https://avatars.githubusercontent.com/u/90289500?u=0129550ecfbb4a92922fff7a406566a47a23dfb0&v=4)](https://github.com/chase-crumbaugh)[@chase-crumbaugh](https://github.com/chase-crumbaugh)
[![Avatar for Zeneos](https://avatars.githubusercontent.com/u/95008961?v=4)](https://github.com/Zeneos)[@Zeneos](https://github.com/Zeneos)
[![Avatar for joseanu](https://avatars.githubusercontent.com/u/2730127?u=9fe1d593bd63c7f116b9c46e9cbd359a2e4304f0&v=4)](https://github.com/joseanu)[@joseanu](https://github.com/joseanu)
[![Avatar for JackFener](https://avatars.githubusercontent.com/u/20380671?u=b51d10b71850203e6360655fa59cc679c5a498e6&v=4)](https://github.com/JackFener)[@JackFener](https://github.com/JackFener)
[![Avatar for swyxio](https://avatars.githubusercontent.com/u/6764957?u=97ad815028595b73b06ee4b0510e66bbe391228d&v=4)](https://github.com/swyxio)[@swyxio](https://github.com/swyxio)
[![Avatar for pczekaj](https://avatars.githubusercontent.com/u/1460539?u=24c2db4a29757f608a54a062340a466cad843825&v=4)](https://github.com/pczekaj)[@pczekaj](https://github.com/pczekaj)
[![Avatar for devinburnette](https://avatars.githubusercontent.com/u/13012689?u=7b68c67ea1bbc272c35be7c0bcf1c66a04554179&v=4)](https://github.com/devinburnette)[@devinburnette](https://github.com/devinburnette)
[![Avatar for ananis25](https://avatars.githubusercontent.com/u/16446513?u=5026326ed39bfee8325c30cdbd24ac20519d21b8&v=4)](https://github.com/ananis25)[@ananis25](https://github.com/ananis25)
[![Avatar for joaopcm](https://avatars.githubusercontent.com/u/58827242?u=3e03812a1074f2ce888b751c48e78a849c7e0aff&v=4)](https://github.com/joaopcm)[@joaopcm](https://github.com/joaopcm)
[![Avatar for SalehHindi](https://avatars.githubusercontent.com/u/15721377?u=37fadd6a7bf9dfa63ceb866bda23ca44a7b2c0c2&v=4)](https://github.com/SalehHindi)[@SalehHindi](https://github.com/SalehHindi)
[![Avatar for cmanou](https://avatars.githubusercontent.com/u/683160?u=e9050e4341c2c9d46b035ea17ea94234634e1b2c&v=4)](https://github.com/cmanou)[@cmanou](https://github.com/cmanou)
[![Avatar for micahriggan](https://avatars.githubusercontent.com/u/3626473?u=508e8c831d8eb804e95985d5191a08c761544fad&v=4)](https://github.com/micahriggan)[@micahriggan](https://github.com/micahriggan)
[![Avatar for w00ing](https://avatars.githubusercontent.com/u/29723695?u=7673821119377d98bba457451719483302147cfa&v=4)](https://github.com/w00ing)[@w00ing](https://github.com/w00ing)
[![Avatar for ardsh](https://avatars.githubusercontent.com/u/23664687?u=158ef7e156a7881b8647ece63683aca2c28f132e&v=4)](https://github.com/ardsh)[@ardsh](https://github.com/ardsh)
[![Avatar for JoeABCDEF](https://avatars.githubusercontent.com/u/39638510?u=f5fac0a3578572817b37a6dfc00adacb705ec7d0&v=4)](https://github.com/JoeABCDEF)[@JoeABCDEF](https://github.com/JoeABCDEF)
[![Avatar for saul-jb](https://avatars.githubusercontent.com/u/2025187?v=4)](https://github.com/saul-jb)[@saul-jb](https://github.com/saul-jb)
[![Avatar for JTCorrin](https://avatars.githubusercontent.com/u/73115680?v=4)](https://github.com/JTCorrin)[@JTCorrin](https://github.com/JTCorrin)
[![Avatar for zandko](https://avatars.githubusercontent.com/u/37948383?u=04ccf6e060b27e39c931c2608381351cf236a28f&v=4)](https://github.com/zandko)[@zandko](https://github.com/zandko)
[![Avatar for federicoestevez](https://avatars.githubusercontent.com/u/10424147?v=4)](https://github.com/federicoestevez)[@federicoestevez](https://github.com/federicoestevez)
[![Avatar for martinseanhunt](https://avatars.githubusercontent.com/u/65744?u=ddac1e773828d8058a40bca680cf549e955f69ae&v=4)](https://github.com/martinseanhunt)[@martinseanhunt](https://github.com/martinseanhunt)
[![Avatar for functorism](https://avatars.githubusercontent.com/u/17207277?u=4df9bc30a55b4da4b3d6fd20a2956afd722bde24&v=4)](https://github.com/functorism)[@functorism](https://github.com/functorism)
[![Avatar for erictt](https://avatars.githubusercontent.com/u/9592198?u=567fa49c73e824525d33eefd836ece16ab9964c8&v=4)](https://github.com/erictt)[@erictt](https://github.com/erictt)
[![Avatar for lesters](https://avatars.githubusercontent.com/u/5798036?u=4eba31d63c3818d17fb8f9aa923599ac63ebfea8&v=4)](https://github.com/lesters)[@lesters](https://github.com/lesters)
[![Avatar for my8bit](https://avatars.githubusercontent.com/u/782268?u=d83da3e6269d53a828bbeb6d661049a1ed185cb0&v=4)](https://github.com/my8bit)[@my8bit](https://github.com/my8bit)
[![Avatar for erhant](https://avatars.githubusercontent.com/u/16037166?u=9d056a2f5059684620e22aa4d880e38183309b51&v=4)](https://github.com/erhant)[@erhant](https://github.com/erhant)
We're so thankful for your support!
And one more thank you to [@tiangolo](https://github.com/tiangolo) for inspiration via FastAPI's [excellent people page](https://fastapi.tiangolo.com/fastapi-people).
https://js.langchain.com/v0.1/docs/community/
Community navigator
===================
Hi! Thanks for being here. We’re lucky to have a community of so many passionate developers building with LangChain–we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other’s work, become each other's customers and collaborators, and so much more.
Whether you’re new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.
* **🦜 Contribute to LangChain**
* **🌍 Meetups, Events, and Hackathons**
* **📣 Help Us Amplify Your Work**
* **☀️ Stay in the loop**
🦜 Contribute to LangChain
==========================
LangChain is the product of more than 5,000 contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
* **[Open a pull request](https://github.com/langchain-ai/langchainjs/issues):** we’d appreciate all forms of contributions–new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we’d love to work on it with you.
* **[Read our contributor guidelines:](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
* **Become an expert:** our experts help the community by answering product questions in Discord. If that’s a role you’d like to play, we’d be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at [hello@langchain.dev](mailto:hello@langchain.dev) and we’ll take it from there!
* **Integrate with LangChain:** if your product integrates with LangChain–or aspires to–we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at [hello@langchain.dev](mailto:hello@langchain.dev) and tell us what you’re working on.
* **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at [hello@langchain.dev](mailto:hello@langchain.dev) if you’d like to explore this role.
🌍 Meetups, Events, and Hackathons
==================================
One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!
* **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
* **Submit an event to our calendar:** email us at [events@langchain.dev](mailto:events@langchain.dev) with a link to your event page! We can also help you spread the word with our local communities.
* **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at [events@langchain.dev](mailto:events@langchain.dev) to tell us about your event!
* **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at [events@langchain.dev](mailto:events@langchain.dev) and we can share more about how it works!
* **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you’d like to do that at an event, send us an email to [hello@langchain.dev](mailto:hello@langchain.dev) with more information about yourself, what you want to talk about, and what city you’re based in and we’ll try to match you with an upcoming event!
* **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at [hello@langchain.dev](mailto:hello@langchain.dev) and let us know how we can help.
📣 Help Us Amplify Your Work
============================
If you’re working on something you’re proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.
* **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we’ll almost certainly see it and can show you some love.
* **Publish something on our blog:** if you're writing about your experience building with LangChain, we'd love to post (or crosspost) it on our blog! E-mail [hello@langchain.dev](mailto:hello@langchain.dev) with a draft of your post, or even just an idea for something you want to write about.
* **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at [hello@langchain.dev](mailto:hello@langchain.dev).
☀️ Stay in the loop
===================
Here’s where our team hangs out, talks shop, spotlights cool work, and shares what we’re up to. We’d love to see you there too.
* **[Twitter](https://twitter.com/LangChainAI):** we post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love!
* **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain
* **[GitHub](https://github.com/langchain-ai/langchainjs):** open pull requests, contribute to a discussion, and/or contribute code
* **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit
https://js.langchain.com/v0.1/docs/additional_resources/tutorials/
Tutorials
=========
Below are links to tutorials and courses on LangChain.js. For written guides on common use cases for LangChain.js, check out the [use cases](/v0.1/docs/use_cases/) and [guides](/v0.1/docs/guides/) sections.
* * *
Deeplearning.ai
--------------------------------------------------------------------
We've partnered with [Deeplearning.ai](https://deeplearning.ai) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng) on a LangChain.js short course.
It covers LCEL and other building blocks you can combine to build more complex chains, as well as fundamentals around loading data for retrieval augmented generation (RAG). Try it for free below:
* [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js)
Scrimba interactive guides
------------------------------------------------------------------------------------------------------
[Scrimba](https://scrimba.com) is a code-learning platform that allows you to interactively edit and run code while watching a video walkthrough.
We've partnered with Scrimba on course materials (called "scrims") that teach the fundamentals of building with LangChain.js - check them out below, and check back for more as they become available!
### Learn LangChain.js
* [Learn LangChain.js on Scrimba](https://scrimba.com/learn/langchain)
A full end-to-end course that walks through how to build a chatbot that can answer questions about a provided document. A great introduction to LangChain and a great first project for learning how to use LangChain Expression Language primitives to perform retrieval!
### LangChain Expression Language (LCEL)
* [The basics (PromptTemplate + LLM)](https://scrimba.com/scrim/c6rD6Nt9)
* [Adding an output parser](https://scrimba.com/scrim/co6ae44248eacc1abd87ae3dc)
* [Attaching function calls to a model](https://scrimba.com/scrim/cof5449f5bc972f8c90be6a82)
* [Composing multiple chains](https://scrimba.com/scrim/co14344c29595bfb29c41f12a)
* [Retrieval chains](https://scrimba.com/scrim/co0e040d09941b4000244db46)
* [Conversational retrieval chains ("Chat with Docs")](https://scrimba.com/scrim/co3ed4a9eb4c6c6d0361a507c)
### Deeper dives
* [Setting up a new `PromptTemplate`](https://scrimba.com/scrim/cbGwRwuV)
* [Setting up `ChatOpenAI` parameters](https://scrimba.com/scrim/cEgbBBUw)
* [Attaching stop sequences](https://scrimba.com/scrim/co9704e389428fe2193eb955c)
Neo4j GraphAcademy
------------------------------------------------------------------------------
[Neo4j](https://neo4j.com) has put together a hands-on, practical course that shows how to build a movie-recommending chatbot in Next.js. It covers retrieval-augmented generation (RAG), tracking history, and more. Check it out below:
* [Build a Neo4j-backed Chatbot with TypeScript](https://graphacademy.neo4j.com/courses/llm-chatbot-typescript/?ref=langchainjs)
https://js.langchain.com/v0.1/docs/contributing/
Developer Guide
===============
Contributing to LangChain
=========================
👋 Hi there! Thank you for being interested in contributing to LangChain. As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow. Please do not try to push directly to this repo unless you are a maintainer.
Quick Links
---------------------------------------------------------
### Not sure what to work on?
If you are not sure what to work on, we have a few suggestions:
* Look at the issues with the [help wanted](https://github.com/langchain-ai/langchainjs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) label. These are issues that we think are good targets for contributors. If you are interested in working on one of these, please comment on the issue so that we can assign it to you. And if you have any questions let us know, we're happy to guide you!
* At the moment our main focus is reaching parity with the Python version for features and base functionality. If you are interested in working on a specific integration or feature, please let us know and we can help you get started.
### New abstractions
We aim to keep the same core APIs between the Python and JS versions of LangChain, where possible. As such we ask that if you have an idea for a new abstraction, please open an issue first to discuss it. This will help us make sure that the API is consistent across both versions. If you're not sure what to work on, we recommend looking at the links above first.
### Want to add a specific integration?
LangChain supports several different types of integrations with third-party providers and frameworks, including LLM providers (e.g. [OpenAI](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/llms/openai.ts)), vector stores (e.g. [FAISS](https://github.com/ewfian/langchainjs/blob/main/langchain/src/vectorstores/faiss.ts)), document loaders (e.g. [Apify](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/document_loaders/web/apify_dataset.ts)), persistent message history stores (e.g. [Redis](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/stores/message/redis.ts)), and more.
We welcome such contributions, but ask that you read our dedicated [integration contribution guide](https://github.com/langchain-ai/langchainjs/blob/main/.github/contributing/INTEGRATIONS.md) for specific details and patterns to consider before opening a pull request.
You can also check out the [guide on extending LangChain.js](https://js.langchain.com/docs/guides/extending_langchain/) in our docs.
#### Integration packages
Integrations should generally reside in the `libs/langchain-community` workspace and be imported as `@langchain/community/module/name`. More in-depth integrations or suites of integrations may also reside in separate packages that depend on and extend `@langchain/core`. See [`@langchain/google-genai`](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-google-genai) for an example.
To make creating packages like this easier, we offer the [`create-langchain-integration`](https://github.com/langchain-ai/langchainjs/blob/main/libs/create-langchain-integration/) utility that will automatically scaffold a repo with support for both ESM + CJS entrypoints. You can run it like this:
$ npx create-langchain-integration
### Want to add a feature that's already in Python?
If you're interested in contributing a feature that's already in the [LangChain Python repo](https://github.com/langchain-ai/langchain) and you'd like some help getting started, you can try pasting code snippets and classes into the [LangChain Python to JS translator](https://langchain-translator.vercel.app/).
It's a chat interface wrapping a fine-tuned `gpt-3.5-turbo` instance trained on prior ported features. This allows the model to innately take into account LangChain-specific code style and imports.
It's an ongoing project, and feedback on runs will be used to improve the [LangSmith dataset](https://smith.langchain.com) for further fine-tuning! Try it out below:
[https://langchain-translator.vercel.app/](https://langchain-translator.vercel.app/)
🗺️ Contributing Guidelines
-------------------------------------------------------------------------------------------------------
### 🚩 GitHub Issues
Our [issues](https://github.com/langchain-ai/langchainjs/issues) page contains bugs, improvements, and feature requests.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature. If two issues are related or blocking, please link them rather than combining them into a single issue.
We will try to keep these issues as up to date as possible, though with the rapid rate of development in this field some may get out of date. If you notice this happening, please just let us know.
### 🙋 Getting Help
Although we try to have a developer setup to make it as easy as possible for others to contribute (see below), it is possible that some pain point may arise around environment setup, linting, documentation, or something else. Should that occur, please contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase. If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help - we do not want these to get in the way of getting good code into the codebase.
### 🏭 Release process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a developer and published to [npm](https://www.npmjs.com/package/langchain).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software, even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)! If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
#### Integration releases
The release script can be executed only while on a fresh `main` branch, with no un-committed changes, from the package root. If working from a fork of the repository, make sure to sync the forked `main` branch with the upstream `main` branch first.
You can invoke the script by calling `yarn release`. If new dependencies have been added to the integration package, install them first (i.e. run `yarn`, then `yarn release`).
There are three parameters which can be passed to this script, one required and two optional.
* **Required**: `<workspace name>`, e.g. `@langchain/core`. The name of the package to release. It can be found in the `name` field of the package's `package.json`.
* **Optional**: `--bump-deps`. Finds all packages in the repo which depend on this workspace, checks out a new branch, updates the dependency version, runs `yarn install`, and commits & pushes to the new branch. Generally, this is not necessary.
* **Optional**: `--tag <tag>`, e.g. `--tag beta`. Adds a tag to the NPM release. Useful if you want to push a release candidate.
This script automatically bumps the package version, creates a new release branch with the changes, pushes the branch to GitHub, uses `release-it` to automatically release to NPM, and more depending on the flags passed.
Halfway through this script, you'll be prompted to enter an NPM OTP (typically from an authenticator app). This value is not stored anywhere and is only used to authenticate the NPM release.
> **Note** Unless releasing `langchain`, `no` should be answered to all prompts following `Publish @langchain/<package> to npm?`. Then, the change should be manually committed with the following commit message: `<package>[patch]: Release <new version>`. E.g.: `groq[patch]: Release 0.0.1`.
Docker must be running if releasing one of `langchain`, `@langchain/core` or `@langchain/community`. These packages run LangChain's export tests, which run inside docker containers.
Full example: `yarn release @langchain/core`.
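Combining the documented flags, a release candidate could be cut with something like the following (the package name here is illustrative):

```
yarn release @langchain/community --tag beta
```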
### 🛠️ Tooling
This project uses the following tools, which are worth getting familiar with if you plan to contribute:
* **[yarn](https://yarnpkg.com/) (v3.4.1)** - dependency management
* **[eslint](https://eslint.org/)** - enforcing standard lint rules
* **[prettier](https://prettier.io/)** - enforcing standard code formatting
* **[jest](https://jestjs.io/)** - testing code
* **[TypeDoc](https://typedoc.org/)** - reference doc generation from comments
* **[Docusaurus](https://docusaurus.io/)** - static site generation for documentation
🚀 Quick Start
----------------------------------------------------------------
Clone this repo, then cd into it:

git clone https://github.com/langchain-ai/langchainjs.git
cd langchainjs
Next, try running the following common tasks:
✅ Common Tasks
-----------------------------------------------------------------
Our goal is to make it as easy as possible for you to contribute to this project. All of the below commands should be run from within a workspace directory (e.g. `langchain`, `libs/langchain-community`) unless otherwise noted.
cd langchain
Or, if you are working on a community integration:
cd libs/langchain-community
### Setup
**Prerequisite**: Node version 18+ is required. Please check your Node version with `node -v` and update it if required.
To get started, you will need to install the dependencies for the project. To do so, run:
yarn
Then, you will need to switch directories into `langchain-core` and build core by running:
cd ../langchain-core
yarn
yarn build
### Linting
We use [eslint](https://eslint.org/) to enforce standard lint rules. To run the linter, run:
yarn lint
### Formatting
We use [prettier](https://prettier.io) to enforce code formatting style. To run the formatter, run:
yarn format
To just check for formatting differences, without fixing them, run:
yarn format:check
### Testing
In general, tests should be added within a `tests/` folder alongside the modules they are testing.
**Unit tests** cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test. Unit tests should be called `*.test.ts`.
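As a minimal sketch, a unit test might look like the following (the `add` helper and its path are hypothetical; `test` and `expect` are provided by the Jest runtime):

```typescript
// math.test.ts — a sketch; the `add` helper is hypothetical
import { add } from "../math.js";

test("add sums two numbers", () => {
  expect(add(1, 2)).toBe(3);
});
```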
To run only unit tests, run:
yarn test
#### Running a single test
To run a single test, run the following from within a workspace:
yarn test:single /path/to/yourtest.test.ts
This is useful for developing individual features.
**Integration tests** cover logic that requires making calls to outside APIs (often integration with other services).
If you add support for a new external API, please add a new integration test. Integration tests should be called `*.int.test.ts`.
Note that most integration tests require credentials or other setup. You will likely need to set up a `langchain/.env` or `libs/langchain-community/.env` file like the example [here](https://github.com/langchain-ai/langchainjs/blob/main/langchain/.env.example).
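For example, an integration test against a live model API might look roughly like this sketch (it assumes `OPENAI_API_KEY` is set in your `.env`):

```typescript
// openai.int.test.ts — a sketch; requires OPENAI_API_KEY to be available
import { OpenAI } from "@langchain/openai";

test("OpenAI returns a text completion", async () => {
  const model = new OpenAI({ temperature: 0 });
  const res = await model.invoke("Say hello in one word.");
  expect(typeof res).toBe("string");
});
```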
We generally recommend only running integration tests with `yarn test:single`, but if you want to run all integration tests, run:
yarn test:integration
### Building
To build the project, run:
yarn build
### Adding an Entrypoint
LangChain exposes multiple subpaths the user can import from, e.g.
import { OpenAI } from "langchain/llms/openai";
We call these subpaths "entrypoints". In general, you should create a new entrypoint if you are adding a new integration with a 3rd party library. If you're adding self-contained functionality without any external dependencies, you can add it to an existing entrypoint.
In order to declare a new entrypoint that users can import from, you should edit the `langchain/langchain.config.js` or `libs/langchain-community/langchain.config.js` file. To add an entrypoint `tools` that imports from `tools/index.ts` you'd add the following to the `entrypoints` key inside the `config` variable:
// ...
entrypoints: {
  // ...
  tools: "tools/index",
},
// ...
If you're adding a new integration which requires installing a third party dependency, you must add the entrypoint to the `requiresOptionalDependency` array, also located inside `langchain/langchain.config.js` or `libs/langchain-community/langchain.config.js`.
// ...
requiresOptionalDependency: [
  // ...
  "tools/index",
],
// ...
This will make sure the entrypoint is included in the published package, and in generated documentation.
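Once published, users would then be able to import from the new subpath (the tool name here is hypothetical):

```typescript
// Hypothetical import enabled by the `tools` entrypoint above
import { SomeTool } from "langchain/tools";
```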
Documentation
---------------------------------------------------------------
### Contribute Documentation
#### Install dependencies
##### Note: you only need to follow these steps if you are building the docs site locally.
1. [Quarto](https://quarto.org/) - package that converts Jupyter notebooks (`.ipynb` files) into `.mdx` files for serving in Docusaurus.
2. `yarn build --filter=core_docs` - It's as simple as that! (or you can simply run `yarn build` from `docs/core_docs/`)
All notebooks are converted to `.md` files and automatically gitignored. If you would like to create a non notebook doc, it must be a `.mdx` file.
### Writing Notebooks
When adding new dependencies inside the notebook you must update the import map inside `deno.json` in the root of the LangChain repo.
This is required because the notebooks use the Deno runtime, and Deno formats imports differently than Node.js.
Example:
// Import in Node:
import { z } from "zod";

// Import in Deno:
import { z } from "npm:/zod";
See examples inside `deno.json` for more details.
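A minimal sketch of such an import-map entry, assuming the standard `deno.json` `imports` format:

```json
{
  "imports": {
    "zod": "npm:/zod"
  }
}
```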
Docs are largely autogenerated by [TypeDoc](https://typedoc.org/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
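As a sketch, TypeDoc picks up JSDoc-style comments like the following (the class and method here are hypothetical):

```typescript
/** A hypothetical retriever used only to illustrate TypeDoc comments. */
export class ExampleRetriever {
  /**
   * Fetches documents relevant to the query.
   * @param query The user's search string.
   * @returns The matching document contents.
   */
  async getRelevantDocuments(query: string): Promise<string[]> {
    return [query];
  }
}
```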
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
Documentation and the site skeleton live under the `docs/` folder. Example code is imported from under the `examples/` folder.
### Running examples
If you add a new major piece of functionality, it is helpful to add an example to showcase how to use it. Most of our users find examples to be the most helpful kind of documentation.
Examples can be added in the `examples/src` directory, e.g. `examples/src/path/to/example`. This example can then be invoked with `yarn example path/to/example` at the top level of the repo.
To run examples that require an environment variable, you'll need to add a `.env` file under `examples/.env`.
### Build Documentation Locally
To generate and view the documentation locally, change to the project root and run `yarn` to ensure dependencies get installed in both the `docs/` and `examples/` workspaces:
cd ..
yarn
Then run:
yarn docs
Advanced
------------------------------------------------
**Environment tests** test whether LangChain works across different JS environments, including Node.js (both ESM and CJS), Edge environments (eg. Cloudflare Workers), and browsers (using Webpack).
To run the environment tests with Docker, run the following command from the project root:
yarn test:exports:docker
https://js.langchain.com/v0.1/docs/get_started/
Get started
===========
Get started with LangChain
* [📄️ Introduction](/v0.1/docs/get_started/introduction/): LangChain is a framework for developing applications powered by language models. It enables applications that:
* [📄️ Installation](/v0.1/docs/get_started/installation/): Supported Environments
* [📄️ Quickstart](/v0.1/docs/get_started/quickstart/): In this quickstart we'll show you how to:
https://js.langchain.com/v0.1/docs/get_started/installation/
Installation
============
Supported Environments
------------------------------------------------------------------------------------------
LangChain is written in TypeScript and can be used in:
* Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
* Cloudflare Workers
* Vercel / Next.js (Browser, Serverless and Edge functions)
* Supabase Edge Functions
* Browser
* Deno
* Bun
However, note that individual integrations may not be supported in all environments.
Installation
--------------------------------------------------------------
To get started, install LangChain with the following command:
* npm: `npm install -S langchain`
* Yarn: `yarn add langchain`
* pnpm: `pnpm add langchain`
### TypeScript
LangChain is written in TypeScript and provides type definitions for all of its public APIs.
Installing integration packages
---------------------------------------------------------------------------------------------------------------------
LangChain supports packages that contain specific module integrations with third-party providers. They can be as specific as [`@langchain/google-genai`](/v0.1/docs/integrations/platforms/google/#chatgooglegenerativeai), which contains integrations just for Google AI Studio models, or as broad as [`@langchain/community`](https://www.npmjs.com/package/@langchain/community), which contains a broader variety of community-contributed integrations.
These packages, as well as the main LangChain package, all depend on [`@langchain/core`](https://www.npmjs.com/package/@langchain/core), which contains the base abstractions that these integration packages extend.
To ensure that all integrations and their types interact with each other properly, it is important that they all use the same version of `@langchain/core`. The best way to guarantee this is to add a `"resolutions"` or `"overrides"` field like the following in your project's `package.json`. The name will depend on your package manager:
tip
The `resolutions` or `pnpm.overrides` fields for `yarn` or `pnpm` must be set in the root `package.json` file.
If you are using `yarn`:
yarn package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "resolutions": { "@langchain/core": "0.1.5" }}
Or for `npm`:
npm package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "overrides": { "@langchain/core": "0.1.5" }}
Or for `pnpm`:
pnpm package.json
{ "name": "your-project", "version": "0.0.0", "private": true, "engines": { "node": ">=18" }, "dependencies": { "@langchain/google-genai": "^0.0.2", "langchain": "0.0.207" }, "pnpm": { "overrides": { "@langchain/core": "0.1.5" } }}
### @langchain/community
The [@langchain/community](https://www.npmjs.com/package/@langchain/community) package contains third-party integrations. It is automatically installed along with `langchain`, but can also be used separately with just `@langchain/core`. Install with:
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
### @langchain/core
The [@langchain/core](https://www.npmjs.com/package/@langchain/core) package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed along with `langchain`, but can also be used separately. Install with:
* npm: `npm install @langchain/core`
* Yarn: `yarn add @langchain/core`
* pnpm: `pnpm add @langchain/core`
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
Loading the library
---------------------------------------------------------------------------------
### ESM
LangChain provides an ESM build targeting Node.js environments. You can import it using the following syntax:
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { OpenAI } from "@langchain/openai";
If you are using TypeScript in an ESM project we suggest updating your `tsconfig.json` to include the following:
tsconfig.json
{ "compilerOptions": { ... "target": "ES2020", // or higher "module": "nodenext", }}
### CommonJS
LangChain provides a CommonJS build targeting Node.js environments. You can import it using the following syntax:
const { OpenAI } = require("@langchain/openai");
### Cloudflare Workers
LangChain can be used in Cloudflare Workers. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
### Vercel / Next.js
LangChain can be used in Vercel / Next.js. We support using LangChain in frontend components, in Serverless functions and in Edge functions. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
### Deno / Supabase Edge Functions
LangChain can be used in Deno / Supabase Edge Functions. You can import it using the following syntax:
import { OpenAI } from "https://esm.sh/@langchain/openai";
or
import { OpenAI } from "npm:@langchain/openai";
We recommend looking at our [Supabase Template](https://github.com/langchain-ai/langchain-template-supabase) for an example of how to use LangChain in Supabase Edge Functions.
### Browser
LangChain can be used in the browser. In our CI we test bundling LangChain with Webpack and Vite, but other bundlers should work too. You can import it using the following syntax:
import { OpenAI } from "@langchain/openai";
Unsupported: Node.js 16
-------------------------------------------------------------------------------------------
We do not support Node.js 16, but if you still want to run LangChain on Node.js 16, you will need to follow the instructions in this section. We do not guarantee that these instructions will continue to work in the future.
You will have to make `fetch` available globally, either:
* run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
* install `node-fetch` and follow the instructions [here](https://github.com/node-fetch/node-fetch#providing-global-access) (see the sketch after this list)
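If you take the `node-fetch` route, the global assignment looks roughly like this (a sketch adapted from the `node-fetch` README):

// Sketch: exposing node-fetch globally for Node.js 16.
import fetch, { Headers, Request, Response } from "node-fetch";

if (!globalThis.fetch) {
  globalThis.fetch = fetch as any;
  globalThis.Headers = Headers as any;
  globalThis.Request = Request as any;
  globalThis.Response = Response as any;
}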
You'll also need to [polyfill `ReadableStream`](https://www.npmjs.com/package/web-streams-polyfill) by installing:
npm i web-streams-polyfill
yarn add web-streams-polyfill
pnpm add web-streams-polyfill
And then adding it to the global namespace in your main entrypoint:
import "web-streams-polyfill/es6";
Additionally, you'll have to polyfill `structuredClone`, e.g. by installing `core-js` and following the instructions [here](https://github.com/zloirock/core-js).
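With `core-js` installed, a single import in your entrypoint should be enough (a sketch; consult the `core-js` docs for your exact setup):

// Sketch: polyfill structuredClone via core-js in your main entrypoint.
import "core-js/actual/structured-clone";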
If you are running Node.js 18+, you do not need to do anything.
https://js.langchain.com/v0.1/docs/get_started/quickstart/
Quickstart
==========
In this quickstart we'll show you how to:
* Get set up with LangChain and LangSmith
* Use the most basic and common components of LangChain: prompt templates, models, and output parsers
* Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
* Build a simple application with LangChain
* Trace your application with LangSmith
That's a fair amount to cover! Let's dive in.
Installation
------------------------------------------------------------
To install LangChain run:
npm install langchain
yarn add langchain
pnpm add langchain
For more details, see our [Installation guide](/v0.1/docs/get_started/installation/).
LangSmith
---------------------------------------------------
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
Building with LangChain
---------------------------------------------------------------------------------------------
LangChain enables building applications that connect external sources of data and computation to LLMs.
In this quickstart, we will walk through a few different ways of doing that:
* We will start with a simple LLM chain, which just relies on information in the prompt template to respond.
* Next, we will build a retrieval chain, which fetches data from a separate database and passes that into the prompt template.
* We will then add in chat history, to create a conversational retrieval chain. This allows you to interact with the LLM in a chat manner, so it remembers previous questions.
* Finally, we will build an agent - which utilizes an LLM to determine whether or not it needs to fetch data to answer questions.
We will cover these at a high level, but keep in mind there is a lot more to each piece! We will link to more in-depth docs as appropriate.
LLM Chain
---------------------------------------------------
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
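For example (a sketch; the older names may still be accepted, but the unified ones are preferred):

// Sketch: unified constructor params.
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-3.5-turbo", // instead of `modelName`
  apiKey: "...", // instead of `openAIApiKey`
});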
**OpenAI**
First we'll need to install the LangChain OpenAI integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Accessing the API requires an API key, which you can get by creating an account [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable:
OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `apiKey` named parameter when initiating the OpenAI Chat Model class:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  apiKey: "...",
});
Otherwise you can initialize without any params:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({});
**Local (using Ollama)**

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.
First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
* [Download](https://ollama.ai/download)
* Fetch a model via e.g. `ollama pull mistral`
Then, make sure the Ollama server is running. Next, you'll need to install the LangChain community package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
And then you can do:
import { ChatOllama } from "@langchain/community/chat_models/ollama";

const chatModel = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "mistral",
});
**Anthropic**

First we'll need to install the LangChain Anthropic integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Accessing the API requires an API key, which you can get by creating an account [here](https://console.anthropic.com/). Once we have a key we'll want to set it as an environment variable:
ANTHROPIC_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `apiKey` named parameter when initiating the Anthropic Chat Model class:
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
  apiKey: "...",
});
Otherwise you can initialize without any params:
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({});
Once you've installed and initialized the LLM of your choice, we can try using it! Let's ask it what LangSmith is - this is something that wasn't present in the training data so it shouldn't have a very good response.
await chatModel.invoke("what is LangSmith?");
AIMessage {
  content: 'LangSmith refers to the combination of two surnames, Lang and Smith. It is most commonly used as a fictional or hypothetical name for a person or a company. This term may also refer to specific individuals or entities named LangSmith in certain contexts.',
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
We can also guide its response with a prompt template. Prompt templates are used to convert raw user input to a better input to the LLM.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world class technical documentation writer."],
  ["user", "{input}"],
]);
We can now combine these into a simple LLM chain:
const chain = prompt.pipe(chatModel);
We can now invoke it and ask the same question:
await chain.invoke({
  input: "what is LangSmith?",
});
AIMessage {
  content: 'LangSmith is a powerful programming language created for high-performance software development. It is designed to be efficient, intuitive, and capable of handling complex computations and data manipulations. With its extensive set of features and libraries, LangSmith provides developers with the tools necessary to build robust and scalable applications.\n' +
    '\n' +
    'Some key features of LangSmith include:\n' +
    '\n' +
    '1. Strong typing: LangSmith enforces type safety, preventing common programming errors and ensuring code reliability.\n' +
    '\n' +
    '2. Advanced memory management: The language provides built-in memory management mechanisms, such as automatic garbage collection, to optimize memory usage and reduce the risk of memory leaks.\n' +
    '\n' +
    '3. Multi-paradigm support: LangSmith supports both procedural and object-oriented programming paradigms, giving developers the flexibility to choose the most suitable approach for their projects.\n' +
    '\n' +
    '4. Modular design: The language promotes modular programming, allowing developers to organize their code into reusable components for easier maintenance and collaboration.\n' +
    '\n' +
    '5. High-performance libraries: LangSmith offers a rich set of libraries for various domains, including graphics, networking, database access, and more. These libraries enhance productivity by providing pre-built solutions for common tasks.\n' +
    '\n' +
    '6. Interoperability: LangSmith enables seamless integration with other programming languages, allowing developers to leverage existing codebases and resources.\n' +
    '\n' +
    "7. Extensibility: Developers can extend LangSmith's functionality through custom libraries and modules, allowing for the creation of domain-specific solutions.\n" +
    '\n' +
    'Overall, LangSmith aims to provide a robust and efficient development environment for creating software applications across various domains, from scientific simulations to web development and beyond.',
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
The model hallucinated an incorrect answer this time, but it did respond in a more proper tone for a technical writer!
The output of a ChatModel (and therefore, of this chain) is a message. However, it's often much more convenient to work with strings. Let's add a simple output parser to convert the chat message to a string.
import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();

const llmChain = prompt.pipe(chatModel).pipe(outputParser);

await llmChain.invoke({
  input: "what is LangSmith?",
});
LangSmith is a sophisticated online language translation tool. It leverages artificial intelligence and machine learning algorithms to provide accurate and efficient translation services across multiple languages. Whether it's translating documents, websites, or text snippets, LangSmith offers a seamless, user-friendly experience while maintaining the integrity and nuances of the original content. Its advanced features include context-aware translations, language customization options, and quality assurance checks, making it an invaluable tool for businesses, individuals, and language professionals alike.
### Diving deeper
We've now successfully set up a basic LLM chain. We only touched on the basics of prompts, models, and output parsers - for a deeper dive into everything mentioned here, see [this section of documentation](/v0.1/docs/modules/model_io/).
Retrieval Chain
---------------------------------------------------------------------
In order to properly answer the original question ("what is LangSmith?") and avoid hallucinations, we need to provide additional context to the LLM. We can do this via retrieval. Retrieval is useful when you have too much data to pass to the LLM directly. You can then use a retriever to fetch only the most relevant pieces and pass those in.
In this process, we will look up relevant documents from a Retriever and then pass them into the prompt. A Retriever can be backed by anything - a SQL table, the internet, etc - but in this instance we will populate a vector store and use that as a retriever. For more information on vectorstores, see [this documentation](/v0.1/docs/modules/data_connection/vectorstores/).
First, we need to load the data that we want to index. We'll use [a document loader](/v0.1/docs/integrations/document_loaders/web_loaders/web_cheerio/) that uses the popular [Cheerio npm package](https://www.npmjs.com/package/cheerio) as a peer dependency to parse data from webpages. Install it as shown below:
npm install cheerio
yarn add cheerio
pnpm add cheerio
Then, use it like this:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const docs = await loader.load();

console.log(docs.length);
console.log(docs[0].pageContent.length);
45772
Note that the size of the loaded document is large and may exceed the maximum amount of data we can pass in a single model call. We can split the document into more manageable chunks to get around this limitation and to reduce the amount of distraction to the model using a [text splitter](/v0.1/docs/modules/data_connection/document_transformers/):
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const splitter = new RecursiveCharacterTextSplitter();

const splitDocs = await splitter.splitDocuments(docs);

console.log(splitDocs.length);
console.log(splitDocs[0].pageContent.length);
60441
Next, we need to index the loaded documents into a vectorstore. This requires a few components, namely an [embedding model](/v0.1/docs/modules/data_connection/text_embedding/) and a [vectorstore](/v0.1/docs/modules/data_connection/vectorstores/).
There are many options for both components. Here are some examples for accessing via OpenAI and via local models:
**OpenAI**
Make sure you have the `@langchain/openai` package installed and the appropriate environment variables set (these are the same as needed for the model above).
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
**Local (using Ollama)**

Make sure you have Ollama running (same setup as with the model).
import { OllamaEmbeddings } from "@langchain/community/embeddings/ollama";

const embeddings = new OllamaEmbeddings({
  model: "nomic-embed-text",
  maxConcurrency: 5,
});
Now, we can use this embedding model to ingest documents into a vectorstore. We will use a [simple in-memory demo vectorstore](/v0.1/docs/integrations/vectorstores/memory/) for simplicity's sake:
**Note:** If you are using local embeddings, this ingestion process may take some time depending on your local hardware.
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);
The LangChain vectorstore class will automatically prepare each raw document using the embeddings model.
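As a quick sanity check before wiring the store into a chain, you can query it directly (a sketch; the exact results depend on the ingested page):

// Sketch: query the vector store directly.
const sanityCheck = await vectorstore.similaritySearch(
  "How can LangSmith help test LLM applications?",
  2 // number of documents to return
);
console.log(sanityCheck.map((doc) => doc.pageContent.slice(0, 100)));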
Now that we have this data indexed in a vectorstore, we will create a retrieval chain. This chain will take an incoming question, look up relevant documents, then pass those documents along with the original question into an LLM and ask it to answer the original question.
First, let's set up the chain that takes a question and the retrieved documents and generates an answer.
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt =
  ChatPromptTemplate.fromTemplate(`Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}`);

const documentChain = await createStuffDocumentsChain({
  llm: chatModel,
  prompt,
});
If we wanted to, we could run this ourselves by passing in documents directly:
import { Document } from "@langchain/core/documents";

await documentChain.invoke({
  input: "what is LangSmith?",
  context: [
    new Document({
      pageContent:
        "LangSmith is a platform for building production-grade LLM applications.",
    }),
  ],
});
LangSmith is a platform for building production-grade Large Language Model (LLM) applications.
However, we want the documents to first come from the retriever we just set up. That way, for a given question we can use the retriever to dynamically select the most relevant documents and pass those in.
import { createRetrievalChain } from "langchain/chains/retrieval";

const retriever = vectorstore.asRetriever();

const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever,
});
We can now invoke this chain. This returns an object - the response from the LLM is in the `answer` key:
const result = await retrievalChain.invoke({
  input: "what is LangSmith?",
});

console.log(result.answer);
LangSmith is a tool developed by LangChain that is used for debugging and monitoring LLMs, chains, and agents in order to improve their performance and reliability for use in production.
tip
Check out this public [LangSmith trace](https://smith.langchain.com/public/b4c3e7bd-d850-4cb2-9c44-2e8c2daed7ba/r) showing the steps of the retrieval chain.
This answer should be much more accurate!
### Diving Deeper
We've now successfully set up a basic retrieval chain. We only touched on the basics of retrieval - for a deeper dive into everything mentioned here, see [this section of documentation](/v0.1/docs/modules/data_connection/).
Conversational Retrieval Chain
------------------------------------------------------------------------------------------------------------------
The chain we've created so far can only answer single questions. One of the main types of LLM applications that people are building is chatbots. So how do we turn this chain into one that can answer follow-up questions?
We can still use the `createRetrievalChain` function, but we need to change two things:
1. The retrieval method should now not just work on the most recent input, but rather should take the whole history into account.
2. The final LLM chain should likewise take the whole history into account.
#### Updating Retrieval
In order to update retrieval, we will create a new chain. This chain will take in the most recent input (`input`) and the conversation history (`chat_history`) and use an LLM to generate a search query.
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { MessagesPlaceholder } from "@langchain/core/prompts";

const historyAwarePrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("chat_history"),
  ["user", "{input}"],
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
  ],
]);

const historyAwareRetrieverChain = await createHistoryAwareRetriever({
  llm: chatModel,
  retriever,
  rephrasePrompt: historyAwarePrompt,
});
We can test this "history aware retriever" out by creating a situation where the user is asking a follow up question:
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const chatHistory = [
  new HumanMessage("Can LangSmith help test my LLM applications?"),
  new AIMessage("Yes!"),
];

await historyAwareRetrieverChain.invoke({
  chat_history: chatHistory,
  input: "Tell me how!",
});
tip
Here's a public [LangSmith trace](https://smith.langchain.com/public/0f4e5ff4-c640-4fe1-ae93-8eb5f32382fc/r) of the above run!
The above trace illustrates that this returns documents about testing in LangSmith. This is because the LLM generated a new query, combining the chat history with the follow-up question.
Now that we have this new retriever, we can create a new chain to continue the conversation with these retrieved documents in mind:
const historyAwareRetrievalPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  new MessagesPlaceholder("chat_history"),
  ["user", "{input}"],
]);

const historyAwareCombineDocsChain = await createStuffDocumentsChain({
  llm: chatModel,
  prompt: historyAwareRetrievalPrompt,
});

const conversationalRetrievalChain = await createRetrievalChain({
  retriever: historyAwareRetrieverChain,
  combineDocsChain: historyAwareCombineDocsChain,
});
Let's now test this out end-to-end!
const result2 = await conversationalRetrievalChain.invoke({
  chat_history: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage("Yes!"),
  ],
  input: "tell me how",
});

console.log(result2.answer);
LangSmith can help test and debug your LLM (Language Model) applications in several ways:

1. Exact Input/Output Visualization: LangSmith provides a straightforward visualization of the exact inputs and outputs for all LLM calls. This helps you understand the specific inputs provided to the model and the corresponding output generated.

2. Editing Prompts: If you encounter a bad output or want to experiment with different inputs, you can edit the prompts directly in LangSmith. By modifying the prompt, you can observe the resulting changes in the output. LangSmith includes a playground feature where you can modify prompts and re-run them multiple times to analyze the impact on the output.

3. Constructing Datasets: LangSmith simplifies the process of constructing datasets for testing changes in your application. You can quickly edit examples and add them to datasets, expanding your evaluation sets or fine-tuning your model for improved quality or reduced costs.

4. Monitoring and Troubleshooting: Once your application is ready for production, LangSmith can be used to monitor its performance. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. LangSmith also allows you to associate feedback programmatically with runs, enabling you to track performance over time and pinpoint underperforming data points.

In summary, LangSmith helps you test, debug, and monitor your LLM applications, providing tools to visualize inputs/outputs, edit prompts, construct datasets, and monitor performance.
tip
Here's a public [LangSmith trace](https://smith.langchain.com/public/bd2cc487-cdab-4934-b1ee-fceec154992b/r) of the above run!
We can see that this gives a coherent answer - we've successfully turned our retrieval chain into a chatbot!
Agent
---------------------------------------
We've so far created examples of chains - where each step is known ahead of time. The final thing we will create is an agent - where the LLM decides what steps to take.
**NOTE: for this example we will only show how to create an agent using OpenAI models, as local models runnable on consumer hardware are not reliable enough yet.**
One of the first things to do when building an agent is to decide what tools it should have access to. For this example, we will give the agent access to two tools:
1. The retriever we just created. This will let it easily answer questions about LangSmith.
2. A search tool. This will let it easily answer questions that require up to date information.
First, let's set up a tool for the retriever we just created:
import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = await createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
The search tool that we will use is [Tavily](/v0.1/docs/integrations/tools/tavily_search/). This will require you to create an API key (they have a generous free tier). After signing up and creating one [in their dashboard](https://app.tavily.com/), you need to set it as an environment variable:
export TAVILY_API_KEY=...
If you do not want to set up an API key, you can skip creating this tool.
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults();
We can now create a list of the tools we want to work with:
const tools = [retrieverTool, searchTool];
Now that we have the tools, we can create an agent to use them and an executor to run the agent. We will go over this pretty quickly. For a deeper dive into what exactly is going on, check out the [agent documentation pages](/v0.1/docs/modules/agents/).
import { pull } from "langchain/hub";import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";// Get the prompt to use - you can modify this!// If you want to see the prompt in full, you can at:// https://smith.langchain.com/hub/hwchase17/openai-functions-agentconst agentPrompt = await pull<ChatPromptTemplate>( "hwchase17/openai-functions-agent");const agentModel = new ChatOpenAI({ model: "gpt-3.5-turbo-1106", temperature: 0,});const agent = await createOpenAIFunctionsAgent({ llm: agentModel, tools, prompt: agentPrompt,});const agentExecutor = new AgentExecutor({ agent, tools, verbose: true,});
We can now invoke the agent and see how it responds! We can ask it questions about LangSmith:
const agentResult = await agentExecutor.invoke({
  input: "how can LangSmith help with testing?",
});

console.log(agentResult.output);
LangSmith can help with testing in the following ways:

1. Debugging: LangSmith helps in debugging unexpected end results, agent looping, slow chains, and token usage. It provides a visualization of the exact inputs/outputs to all LLM calls, making it easier to understand them.

2. Modifying Prompts: LangSmith allows you to modify prompts and observe resulting changes to the output. This feature supports OpenAI and Anthropic models and works for LLM and Chat Model calls.

3. Dataset Construction: LangSmith simplifies dataset construction for testing changes. It provides a straightforward visualization of inputs/outputs to LLM calls, allowing you to understand them easily.

4. Monitoring: LangSmith can be used to monitor applications in production by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for programmatically associating feedback with runs to track performance over time.

Overall, LangSmith is a valuable tool for testing, debugging, and monitoring applications that utilize language models and agents.
tip
Here's a public [LangSmith trace](https://smith.langchain.com/public/d87c5588-7edc-4378-800a-3cf741c7dc05/r) of the above run!
We can ask it about the weather:
const agentResult2 = await agentExecutor.invoke({
  input: "what is the weather in SF?",
});

console.log(agentResult2.output);
The weather in San Francisco, California for December 29, 2023 is expected to have average high temperatures of 50 to 65 °F and average low temperatures of 40 to 55 °F. There may be periods of rain with a high of 59°F and winds from the SSE at 10 to 20 mph. For more detailed information, you can visit [this link](https://www.weathertab.com/en/g/o/12/united-states/california/san-francisco/).
tip
Here's a public [LangSmith trace](https://smith.langchain.com/public/94339def-8628-4335-ae7d-10776e528beb/r) of the above run!
We can have conversations with it:
const agentResult3 = await agentExecutor.invoke({
  chat_history: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage("Yes!"),
  ],
  input: "Tell me how",
});

console.log(agentResult3.output);
LangSmith can help test your LLM applications by providing the following features:

1. Debugging: LangSmith helps in debugging LLMs, chains, and agents by providing a visualization of the exact inputs/outputs to all LLM calls, allowing you to understand them easily.

2. Prompt Editing: You can modify the prompt and re-run it to observe the resulting changes to the output as many times as needed using LangSmith's playground feature.

3. Monitoring: LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.

4. Feedback and Dataset Expansion: You can associate feedback programmatically with runs, add examples to datasets, and fine-tune a model for improved quality or reduced costs.

5. Failure Analysis: LangSmith allows you to identify how your chain can fail and monitor these failures, which can be valuable data points for testing future chain versions.

These features make LangSmith a valuable tool for testing and improving LLM applications.
tip
Here's a public [LangSmith trace](https://smith.langchain.com/public/e73f19b8-323c-41ce-ad75-d354c6f8b3aa/r) of the above run!
Diving Deeper
-----------------------------------------------------------------
We've now successfully set up a basic agent. We only touched on the basics of agents - for a deeper dive into everything mentioned here, see this [section of documentation](/v0.1/docs/modules/agents/).
Next steps
------------------------------------------------------
We've touched on how to build an application with LangChain, and how to trace it with LangSmith. There are a lot more features than we can cover here. To continue on your journey, we recommend you read the following (in order):
* All of these features are backed by [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/) - a way to chain these components together. Check out that documentation to better understand how to create custom chains.
* [Model I/O](/v0.1/docs/modules/model_io/) covers more details of prompts, LLMs, and output parsers.
* [Retrieval](/v0.1/docs/modules/data_connection/) covers more details of everything related to retrieval.
* [Agents](/v0.1/docs/modules/agents/) covers details of everything related to agents.
* Explore common [end-to-end use cases](/v0.1/docs/use_cases/).
* [Read up on LangSmith](https://docs.smith.langchain.com/), the platform for debugging, testing, monitoring and more.
https://js.langchain.com/v0.1/docs/expression_language/
LangChain Expression Language (LCEL)
====================================
LangChain Expression Language or LCEL is a declarative way to easily compose chains together. Any chain constructed this way will automatically have full sync, async, and streaming support.
If you're looking for a good place to get started, check out the [Cookbook section](/v0.1/docs/expression_language/cookbook/) - it shows off the various Expression Language pieces in order from simple to more complex.
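To give a flavor of that out-of-the-box support, here is a sketch of streaming an LCEL chain (assuming `@langchain/openai` is installed and `OPENAI_API_KEY` is set):

// Sketch: every LCEL chain exposes .stream() with no extra wiring.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
])
  .pipe(new ChatOpenAI({}))
  .pipe(new StringOutputParser());

const stream = await chain.stream({ topic: "bears" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}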
#### [Interface](/v0.1/docs/expression_language/interface/)
The base interface shared by all LCEL objects
#### [Cookbook](/v0.1/docs/expression_language/cookbook/)
Examples of common LCEL usage patterns
#### [Why use LCEL](/v0.1/docs/expression_language/why/)
A deeper dive into the benefits of LCEL
https://js.langchain.com/v0.1/docs/expression_language/why/
Why use LCEL?
=============
The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully running in production LCEL chains with 100s of steps). To highlight a few of the reasons you might want to use LCEL:
* optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, for the smallest possible latency.
* support for retries and fallbacks: more recently we’ve added support for configuring retries and fallbacks for any part of your LCEL chain (see the sketch after this list). This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
* accessing intermediate results: for more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. We’ve added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it’s available on every LangServe server.
* tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what’s happening in production. To enable this all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.
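As a sketch of what configuring retries and fallbacks looks like (the model names here are only examples):

// Sketch: retries and fallbacks attach to any runnable.
import { ChatOpenAI } from "@langchain/openai";

const primary = new ChatOpenAI({ model: "gpt-4" }).withRetry({
  stopAfterAttempt: 2,
});

const modelWithFallback = primary.withFallbacks({
  fallbacks: [new ChatOpenAI({ model: "gpt-3.5-turbo" })],
});

// modelWithFallback can now be piped into a chain like any other runnable.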
https://js.langchain.com/v0.1/docs/expression_language/get_started/
Get started
===========
LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging.
Basic example: prompt + model + output parser
------------------------------------------------------------------------------------------------------------------------------------------------------------
The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);
const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const response = await chain.invoke({
  topic: "ice cream",
});
console.log(response);
/**
Why did the ice cream go to the gym?
Because it wanted to get a little "cone"ditioning!
 */
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
tip
[LangSmith trace](https://smith.langchain.com/public/dcac6d79-5254-4889-a974-4b3abaf605b4/r)
Notice in this line we're chaining our prompt, model, and output parser together:
const chain = prompt.pipe(model).pipe(outputParser);
The `.pipe()` method allows for chaining together any number of runnables. It will pass the output of one through to the input of the next.
Here, the prompt is passed a `topic` and when invoked it returns a formatted string with the `{topic}` input variable replaced with the string we passed to the invoke call. That string is then passed as the input to the LLM which returns a `BaseMessage` object. Finally, the output parser takes that `BaseMessage` object and returns the content of that object as a string.
### 1. Prompt
`prompt` is a `BasePromptTemplate`, which means it takes in an object of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing BaseMessages and for producing a string.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);

const promptValue = await prompt.invoke({ topic: "ice cream" });
console.log(promptValue);
/**
ChatPromptValue {
  messages: [
    HumanMessage {
      content: 'Tell me a short joke about ice cream',
      name: undefined,
      additional_kwargs: {}
    }
  ]
}
 */

const promptAsMessages = promptValue.toChatMessages();
console.log(promptAsMessages);
/**
[
  HumanMessage {
    content: 'Tell me a short joke about ice cream',
    name: undefined,
    additional_kwargs: {}
  }
]
 */

const promptAsString = promptValue.toString();
console.log(promptAsString);
/**
Human: Tell me a short joke about ice cream
 */
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
### 2. Model
The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `BaseMessage`.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";

const response = await model.invoke(promptAsString);
console.log(response);
/**
AIMessage {
  content: 'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
 */
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
If our model were an LLM, it would output a string.
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";

const response = await model.invoke(promptAsString);
console.log(response);
/**
Why did the ice cream go to therapy?
Because it was feeling a little rocky road.
 */
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
### 3. Output parser
And lastly we pass our `model` output to the `outputParser`, which is a `BaseOutputParser`, meaning it takes either a string or a `BaseMessage` as input. The `StringOutputParser` specifically simply converts any input into a string.
import { AIMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();

const message = new AIMessage(
  'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!'
);

const parsed = await outputParser.invoke(message);
console.log(parsed);
/**
Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!
 */
#### API Reference:
* [AIMessage](https://api.js.langchain.com/classes/langchain_core_messages.AIMessage.html) from `@langchain/core/messages`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
RAG Search Example
------------------------------------------------------------------------------
For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions.
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableLambda,
  RunnableMap,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const vectorStore = await HNSWLib.fromDocuments(
  [
    new Document({ pageContent: "Harrison worked at Kensho" }),
    new Document({ pageContent: "Bears like to eat honey." }),
  ],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever(1);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "ai",
    `Answer the question based on only the following context:

{context}`,
  ],
  ["human", "{question}"],
]);

const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();

const setupAndRetrieval = RunnableMap.from({
  context: new RunnableLambda({
    func: (input: string) =>
      retriever.invoke(input).then((response) => response[0].pageContent),
  }).withConfig({ runName: "contextRetriever" }),
  question: new RunnablePassthrough(),
});

const chain = setupAndRetrieval.pipe(prompt).pipe(model).pipe(outputParser);

const response = await chain.invoke("Where did Harrison work?");
console.log(response);
/**
Harrison worked at Kensho.
 */
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnableLambda](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableLambda.html) from `@langchain/core/runnables`
* [RunnableMap](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableMap.html) from `@langchain/core/runnables`
* [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
tip
[LangSmith trace](https://smith.langchain.com/public/f0205e20-c46f-47cd-a3a4-6a95451f8a25/r)
In this chain we add some extra logic around retrieving context from a vector store.
We first instantiated our model, vector store and output parser. Then we defined our prompt, which takes in two input variables:
* `context` -> this is a string which is returned from our vector store based on a semantic search from the input.
* `question` -> this is the question we want to ask.
Next we created a `setupAndRetrieval` runnable. This has two components which return the values required by our prompt:
* `context` -> this is a `RunnableLambda` which takes the input from the `.invoke()` call, makes a request to our vector store, and returns the first result.
* `question` -> this uses a `RunnablePassthrough`, which simply passes the input through to the next step; in our case, it supplies the value for the `question` key in the object we defined.
Both of these are wrapped inside a `RunnableMap`. This is a special type of runnable that takes an object of runnables and executes them all in parallel. It then returns an object with the same keys as the input object, but with the values replaced with the output of the runnables.
Finally, we pass the output of the `setupAndRetrieval` map to our `prompt`, and then to our `model` and `outputParser` as before.
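To see the parallel-execution behavior of `RunnableMap` in isolation, here is a minimal, self-contained sketch (the `doubled` and `squared` keys are purely illustrative and not part of the chain above):

```typescript
import { RunnableLambda, RunnableMap } from "@langchain/core/runnables";

// Both lambdas receive the same input and run in parallel;
// their outputs are collected under the same keys.
const map = RunnableMap.from({
  doubled: new RunnableLambda({ func: (x: number) => x * 2 }),
  squared: new RunnableLambda({ func: (x: number) => x * x }),
});

const result = await map.invoke(3);
console.log(result);
// { doubled: 6, squared: 9 }
```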
https://js.langchain.com/v0.1/docs/expression_language/interface/
Interface
=========
In an effort to make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol that most components implement. This is a standard interface with a few different methods, which make it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:
* [`stream`](/v0.1/docs/expression_language/interface/#stream): stream back chunks of the response
* [`invoke`](/v0.1/docs/expression_language/interface/#invoke): call the chain on an input
* [`batch`](/v0.1/docs/expression_language/interface/#batch): call the chain on a list of inputs
* [`streamLog`](/v0.1/docs/expression_language/interface/#stream-log): stream back intermediate steps as they happen, in addition to the final response
* [`streamEvents`](/v0.1/docs/expression_language/interface/#stream-events): **beta** stream events as they happen in the chain (introduced in `@langchain/core` 0.1.27)
The **input type** varies by component:

| Component | Input Type |
| --- | --- |
| Prompt | Object |
| Retriever | Single string |
| LLM, ChatModel | Single string, list of chat messages or PromptValue |
| Tool | Single string, or object, depending on the tool |
| OutputParser | The output of an LLM or ChatModel |
The **output type** also varies by component:

| Component | Output Type |
| --- | --- |
| LLM | String |
| ChatModel | ChatMessage |
| Prompt | PromptValue |
| Retriever | List of documents |
| Tool | Depends on the tool |
| OutputParser | Depends on the parser |
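As a quick sketch to make these types concrete (reusing the joke prompt that appears in the examples below): invoking a prompt yields a `PromptValue`, which a chat model accepts and turns into a chat message.

```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

// Prompt: object in -> PromptValue out
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);
const promptValue = await promptTemplate.invoke({ topic: "bears" });

// ChatModel: string, messages, or PromptValue in -> ChatMessage out
const model = new ChatOpenAI({});
const message = await model.invoke(promptValue);
console.log(message.content);
```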
You can combine runnables (and runnable-like objects such as functions and objects whose values are all functions) into sequences in two ways:
* Call the `.pipe` instance method, which takes another runnable-like as an argument
* Use the `RunnableSequence.from([])` static method with an array of runnable-likes, which will run in sequence when invoked
See below for examples of how this looks.
Stream
------------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const stream = await chain.stream({ topic: "bears" });

// Each chunk has the same interface as a chat message
for await (const chunk of stream) {
  console.log(chunk?.content);
}

/*
  Why don't bears wear shoes?

  Because they have bear feet!
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Invoke
------------------------------------------
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

// You can also create a chain using an array of runnables
const chain = RunnableSequence.from([promptTemplate, model]);

const result = await chain.invoke({ topic: "bears" });
console.log(result);

/*
  AIMessage {
    content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
  }
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
Batch
---------------------------------------
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const result = await chain.batch([{ topic: "bears" }, { topic: "cats" }]);
console.log(result);

/*
  [
    AIMessage {
      content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
    },
    AIMessage {
      content: "Why don't cats play poker in the wild?\n\nToo many cheetahs!"
    }
  ]
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
You can also pass additional arguments to the call. The standard LCEL config object contains an option to set maximum concurrency, and an additional `batch()` specific config object that includes an option for whether or not to return exceptions instead of throwing them (useful for gracefully handling failures!):
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "badmodel",
});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const result = await chain.batch(
  [{ topic: "bears" }, { topic: "cats" }],
  { maxConcurrency: 1 },
  { returnExceptions: true }
);
console.log(result);

/*
  [
    NotFoundError: The model `badmodel` does not exist
        at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:6)
        at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:381:13)
        at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:442:15)
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
        at async file:///Users/jacoblee/langchain/langchainjs/langchain/dist/chat_models/openai.js:514:29
        at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
      status: 404,
    NotFoundError: The model `badmodel` does not exist
        at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:6)
        at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:381:13)
        at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:442:15)
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
        at async file:///Users/jacoblee/langchain/langchainjs/langchain/dist/chat_models/openai.js:514:29
        at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
      status: 404,
  ]
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Stream log
------------------------------------------------------
All runnables also have a method called `.streamLog()` which is used to stream all or part of the intermediate steps of your chain/sequence as they happen.
This is useful to show progress to the user, to use intermediate results, or to debug your chain. You can stream all steps (default) or include/exclude steps by name, tags or metadata.
This method yields [JSONPatch](https://jsonpatch.com/) ops that, when applied in the same order as received, build up the RunState.
To reconstruct the JSONPatches into a single aggregated JSON object, you can use the [`applyPatch`](https://api.js.langchain.com/functions/langchain_core_utils_json_patch.applyPatch.html) method, or call `.concat()` on the emitted chunks, as the retrieval example further below does.
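As a minimal, hedged sketch of the `applyPatch` route (assuming a `logStream` obtained from `.streamLog()` as in the retrieval example below, and that `applyPatch` follows the usual fast-json-patch shape, returning a result whose `newDocument` field holds the updated object):

```typescript
import { applyPatch } from "@langchain/core/utils/json_patch";

// Build up the RunState by applying each patch's JSONPatch ops in order.
let state: Record<string, any> = {};
for await (const logPatch of logStream) {
  state = applyPatch(state, logPatch.ops).newDocument;
}
console.log(state);
```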
Here's an example with streaming intermediate documents from a retrieval chain:
* npm
* Yarn
* pnpm
npm install @langchain/community @langchain/openai
yarn add @langchain/community @langchain/openai
pnpm add @langchain/community @langchain/openai
```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { formatDocumentsAsString } from "langchain/util/document";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// Initialize the LLM to use to answer the question.
const model = new ChatOpenAI({});

const vectorStore = await HNSWLib.fromTexts(
  [
    "mitochondria is the powerhouse of the cell",
    "mitochondria is made of lipids",
  ],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// Initialize a retriever wrapper around the vector store
const vectorStoreRetriever = vectorStore.asRetriever();

// Create a system & human prompt for the chat model
const SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}`;
const messages = [
  SystemMessagePromptTemplate.fromTemplate(SYSTEM_TEMPLATE),
  HumanMessagePromptTemplate.fromTemplate("{question}"),
];
const prompt = ChatPromptTemplate.fromMessages(messages);

const chain = RunnableSequence.from([
  {
    context: vectorStoreRetriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const logStream = await chain.streamLog("What is the powerhouse of the cell?");

let state;
for await (const logPatch of logStream) {
  console.log(JSON.stringify(logPatch));
  if (!state) {
    state = logPatch;
  } else {
    state = state.concat(logPatch);
  }
}
console.log("aggregate", state);

/*
{"ops":[{"op":"replace","path":"","value":{"id":"5a79d2e7-171a-4034-9faa-63af88e5a451","streamed_output":[],"logs":{}}}]}
{"ops":[{"op":"add","path":"/logs/RunnableMap","value":{"id":"5948dd9f-b827-45f8-9fa6-74e5cc972a56","name":"RunnableMap","type":"chain","tags":["seq:step:1"],"metadata":{},"start_time":"2023-12-23T00:20:46.664Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/RunnableSequence","value":{"id":"e9e9ef5e-3a04-4110-9a24-517c929b9137","name":"RunnableSequence","type":"chain","tags":["context"],"metadata":{},"start_time":"2023-12-23T00:20:46.804Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/RunnablePassthrough","value":{"id":"4c79d835-87e5-4ff8-b560-987aea83c0e4","name":"RunnablePassthrough","type":"chain","tags":["question"],"metadata":{},"start_time":"2023-12-23T00:20:46.805Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/RunnablePassthrough/final_output","value":{"output":"What is the powerhouse of the cell?"}},{"op":"add","path":"/logs/RunnablePassthrough/end_time","value":"2023-12-23T00:20:46.947Z"}]}
{"ops":[{"op":"add","path":"/logs/VectorStoreRetriever","value":{"id":"1e169f18-711e-47a3-910e-ee031f70b6e0","name":"VectorStoreRetriever","type":"retriever","tags":["seq:step:1","hnswlib"],"metadata":{},"start_time":"2023-12-23T00:20:47.082Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/VectorStoreRetriever/final_output","value":{"documents":[{"pageContent":"mitochondria is the powerhouse of the cell","metadata":{"id":1}},{"pageContent":"mitochondria is made of lipids","metadata":{"id":2}}]}},{"op":"add","path":"/logs/VectorStoreRetriever/end_time","value":"2023-12-23T00:20:47.398Z"}]}
{"ops":[{"op":"add","path":"/logs/RunnableLambda","value":{"id":"a0d61a88-8282-42be-8949-fb0e8f8f67cd","name":"RunnableLambda","type":"chain","tags":["seq:step:2"],"metadata":{},"start_time":"2023-12-23T00:20:47.495Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/RunnableLambda/final_output","value":{"output":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableLambda/end_time","value":"2023-12-23T00:20:47.604Z"}]}
{"ops":[{"op":"add","path":"/logs/RunnableSequence/final_output","value":{"output":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableSequence/end_time","value":"2023-12-23T00:20:47.690Z"}]}
{"ops":[{"op":"add","path":"/logs/RunnableMap/final_output","value":{"question":"What is the powerhouse of the cell?","context":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableMap/end_time","value":"2023-12-23T00:20:47.780Z"}]}
{"ops":[{"op":"add","path":"/logs/ChatPromptTemplate","value":{"id":"5b6cff77-0c52-4218-9bde-d92c33ad12f3","name":"ChatPromptTemplate","type":"prompt","tags":["seq:step:2"],"metadata":{},"start_time":"2023-12-23T00:20:47.864Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/ChatPromptTemplate/final_output","value":{"lc":1,"type":"constructor","id":["langchain_core","prompt_values","ChatPromptValue"],"kwargs":{"messages":[{"lc":1,"type":"constructor","id":["langchain_core","messages","SystemMessage"],"kwargs":{"content":"Use the following pieces of context to answer the question at the end.\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\nmitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids","additional_kwargs":{}}},{"lc":1,"type":"constructor","id":["langchain_core","messages","HumanMessage"],"kwargs":{"content":"What is the powerhouse of the cell?","additional_kwargs":{}}}]}}},{"op":"add","path":"/logs/ChatPromptTemplate/end_time","value":"2023-12-23T00:20:47.956Z"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI","value":{"id":"0cc3b220-ca7f-4fd3-88d5-bea1f7417c3d","name":"ChatOpenAI","type":"llm","tags":["seq:step:3"],"metadata":{},"start_time":"2023-12-23T00:20:48.126Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/StrOutputParser","value":{"id":"47d9bd52-c14a-420d-8d52-1106d751581c","name":"StrOutputParser","type":"parser","tags":["seq:step:4"],"metadata":{},"start_time":"2023-12-23T00:20:48.666Z","streamed_output_str":[]}}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":""}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":""}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"The"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":"The"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" mitochond"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" mitochond"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"ria"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":"ria"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" is"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" is"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" the"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" the"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" powerhouse"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" powerhouse"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" of"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" of"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" the"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" the"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" cell"}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":" cell"}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"."}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":"."}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":""}]}
{"ops":[{"op":"add","path":"/streamed_output/-","value":""}]}
{"ops":[{"op":"add","path":"/logs/ChatOpenAI/final_output","value":{"generations":[[{"text":"The mitochondria is the powerhouse of the cell.","generationInfo":{"prompt":0,"completion":0},"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"The mitochondria is the powerhouse of the cell.","additional_kwargs":{}}}}]]}},{"op":"add","path":"/logs/ChatOpenAI/end_time","value":"2023-12-23T00:20:48.841Z"}]}
{"ops":[{"op":"add","path":"/logs/StrOutputParser/final_output","value":{"output":"The mitochondria is the powerhouse of the cell."}},{"op":"add","path":"/logs/StrOutputParser/end_time","value":"2023-12-23T00:20:48.945Z"}]}
{"ops":[{"op":"replace","path":"/final_output","value":{"output":"The mitochondria is the powerhouse of the cell."}}]}
*/

// Aggregate
/*
aggregate {
  id: '1ed678b9-e1cf-4ef9-bb8b-2fa083b81725',
  streamed_output: [
    '', 'The', ' powerhouse', ' of', ' the', ' cell',
    ' is', ' the', ' mitochond', 'ria', '.', ''
  ],
  final_output: { output: 'The powerhouse of the cell is the mitochondria.' },
  logs: {
    RunnableMap: { id: 'ff268fa1-a621-41b5-a832-4f23eae99d8e', name: 'RunnableMap', type: 'chain', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:33.851Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:35.000Z' },
    RunnablePassthrough: { id: '62b54982-edb3-4101-a53e-1d4201230668', name: 'RunnablePassthrough', type: 'chain', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:34.073Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:34.226Z' },
    RunnableSequence: { id: 'a8893fb5-63ec-4b13-bb49-e6d4435cc5e4', name: 'RunnableSequence', type: 'chain', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:34.074Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:34.893Z' },
    VectorStoreRetriever: { id: 'd145704c-64bb-491d-9a2c-814ee3d1e6a2', name: 'VectorStoreRetriever', type: 'retriever', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:34.234Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:34.518Z' },
    RunnableLambda: { id: 'a23a552a-b96f-4c07-a45d-c5f3861fad5d', name: 'RunnableLambda', type: 'chain', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:34.610Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:34.785Z' },
    ChatPromptTemplate: { id: 'a5e8439e-a6e4-4cf3-ba17-c223ea874a0a', name: 'ChatPromptTemplate', type: 'prompt', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:35.097Z', streamed_output_str: [], final_output: [ChatPromptValue], end_time: '2024-01-04T20:21:35.193Z' },
    ChatOpenAI: { id: 'd9c9d340-ea38-4ef4-a8a8-60f52da4e838', name: 'ChatOpenAI', type: 'llm', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:35.282Z', streamed_output_str: [Array], final_output: [Object], end_time: '2024-01-04T20:21:36.059Z' },
    StrOutputParser: { id: 'c55f9f3f-048b-43d5-ba48-02f3b24b8f96', name: 'StrOutputParser', type: 'parser', tags: [Array], metadata: {}, start_time: '2024-01-04T20:21:35.842Z', streamed_output_str: [], final_output: [Object], end_time: '2024-01-04T20:21:36.157Z' }
  }
}
*/
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [formatDocumentsAsString](https://api.js.langchain.com/functions/langchain_util_document.formatDocumentsAsString.html) from `langchain/util/document`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [SystemMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.SystemMessagePromptTemplate.html) from `@langchain/core/prompts`
Stream events
---------------------------------------------------------------
Event Streaming is a **beta** API, and may change a bit based on feedback. It provides a way to stream both intermediate steps and final output from the chain.
Note: Introduced in `@langchain/core` 0.1.27
For now, when using the `streamEvents` API, please make sure that:

* Any custom functions / runnables propagate callbacks.
* Model parameters are set appropriately to force the LLM to stream tokens.
### Event Reference
Here is a reference table that shows some events that might be emitted by the various Runnable objects. Definitions for some of these runnables are included after the table.
⚠️ When streaming, the inputs for a runnable will not be available until the input stream has been entirely consumed. This means that inputs will be available for the corresponding `end` event rather than the `start` event.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | | 'Hello human!' |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} | | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_chunk | [retriever name] | {documents: [...]} | | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({})];

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
  streaming: true,
});

// Get the prompt to use - you can modify this!
// You can view the prompt in full at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
}).withConfig({ runName: "Agent" });

const eventStream = await agentExecutor.streamEvents(
  {
    input: "what is the weather in SF",
  },
  { version: "v1" }
);

for await (const event of eventStream) {
  const eventType = event.event;
  if (eventType === "on_chain_start") {
    // Was assigned when creating the agent with `.withConfig({"runName": "Agent"})` above
    if (event.name === "Agent") {
      console.log("\n-----");
      console.log(
        `Starting agent: ${event.name} with input: ${JSON.stringify(
          event.data.input
        )}`
      );
    }
  } else if (eventType === "on_chain_end") {
    // Was assigned when creating the agent with `.withConfig({"runName": "Agent"})` above
    if (event.name === "Agent") {
      console.log("\n-----");
      console.log(`Finished agent: ${event.name}\n`);
      console.log(`Agent output was: ${event.data.output}`);
      console.log("\n-----");
    }
  } else if (eventType === "on_llm_stream") {
    const content = event.data?.chunk?.message?.content;
    // Empty content in the context of OpenAI means that the model is asking
    // for a tool to be invoked via function call, so we only print non-empty content.
    if (content !== undefined && content !== "") {
      console.log(`| ${content}`);
    }
  } else if (eventType === "on_tool_start") {
    console.log("\n-----");
    console.log(
      `Starting tool: ${event.name} with inputs: ${event.data.input}`
    );
  } else if (eventType === "on_tool_end") {
    console.log("\n-----");
    console.log(`Finished tool: ${event.name}\n`);
    console.log(`Tool output was: ${event.data.output}`);
    console.log("\n-----");
  }
}
```
#### API Reference:
* [TavilySearchResults](https://api.js.langchain.com/classes/langchain_community_tools_tavily_search.TavilySearchResults.html) from `@langchain/community/tools/tavily_search`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [pull](https://api.js.langchain.com/functions/langchain_hub.pull.html) from `langchain/hub`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIFunctionsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIFunctionsAgent.html) from `langchain/agents`
```
-----
Starting agent: Agent with input: {"input":"what is the weather in SF"}

-----
Starting tool: TavilySearchResults with inputs: weather in San Francisco

-----
Finished tool: TavilySearchResults

Tool output was: [{"title":"Weather in San Francisco","url":"https://www.weatherapi.com/","content":"Weather in San Francisco is {'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1707638479, 'localtime': '2024-02-11 0:01'}, 'current': {'last_updated_epoch': 1707638400, 'last_updated': '2024-02-11 00:00', 'temp_c': 11.1, 'temp_f': 52.0, 'is_day': 0, 'condition': {'text': 'Partly cloudy', 'icon': '//cdn.weatherapi.com/weather/64x64/night/116.png', 'code': 1003}, 'wind_mph': 9.4, 'wind_kph': 15.1, 'wind_degree': 270, 'wind_dir': 'W', 'pressure_mb': 1022.0, 'pressure_in': 30.18, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 83, 'cloud': 25, 'feelslike_c': 11.5, 'feelslike_f': 52.6, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 1.0, 'gust_mph': 13.9, 'gust_kph': 22.3}}","score":0.98371,"raw_content":null},{"title":"San Francisco, California November 2024 Weather Forecast","url":"https://www.weathertab.com/en/c/e/11/united-states/california/san-francisco/","content":"Temperature Forecast Temperature Forecast Normal Avg High Temps 60 to 70 °F Avg Low Temps 45 to 55 °F Weather Forecast Legend WeatherTAB helps you plan activities on days with the least risk of rain. Our forecasts are not direct predictions of rain/snow. Not all risky days will have rain/snow.","score":0.9517,"raw_content":null},{"title":"Past Weather in San Francisco, California, USA — Yesterday or Further Back","url":"https://www.timeanddate.com/weather/usa/san-francisco/historic","content":"Past Weather in San Francisco, California, USA — Yesterday and Last 2 Weeks. Weather. Time Zone. DST Changes. Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 52 °F. Light rain. Overcast.","score":0.945,"raw_content":null},{"title":"San Francisco, California February 2024 Weather Forecast - detailed","url":"https://www.weathertab.com/en/g/e/02/united-states/california/san-francisco/","content":"Free Long Range Weather Forecast for San Francisco, California February 2024. Detailed graphs of monthly weather forecast, temperatures, and degree days.","score":0.92177,"raw_content":null},{"title":"San Francisco Weather in 2024 - extremeweatherwatch.com","url":"https://www.extremeweatherwatch.com/cities/san-francisco/year-2024","content":"Year: What's the hottest temperature in San Francisco so far this year? As of February 2, the highest temperature recorded in San Francisco, California in 2024 is 73 °F which happened on January 29. Highest Temperatures: All-Time By Year Highest Temperatures in San Francisco in 2024 What's the coldest temperature in San Francisco so far this year?","score":0.91598,"raw_content":null}]

-----
| The| current| weather| in| San| Francisco| is| partly| cloudy| with| a| temperature| of|| 52| .| 0| °F| (| 11| .| 1| °C| ).| The| wind| speed| is|| 15| .| 1| k| ph| coming| from| the| west| ,| and| the| humidity| is| at|| 83| %.| If| you| need| more| detailed| information| ,| you| can| visit| [| Weather| in| San| Francisco| ](| https| ://| www| .weather| api| .com| /| ).

-----
Finished agent: Agent

Agent output was: The current weather in San Francisco is partly cloudy with a temperature of 52.0°F (11.1°C). The wind speed is 15.1 kph coming from the west, and the humidity is at 83%. If you need more detailed information, you can visit [Weather in San Francisco](https://www.weatherapi.com/).

-----
```
https://js.langchain.com/v0.1/docs/expression_language/streaming/
Streaming With LangChain
========================
Streaming is critical in making applications based on LLMs feel responsive to end-users.
Important LangChain primitives like LLMs, parsers, prompts, retrievers, and agents implement the LangChain Runnable Interface.
This interface provides two general approaches to stream content:
* `.stream()`: a default implementation of streaming that streams the final output from the chain.
* `streamEvents()` and `streamLog()`: these provide a way to stream both intermediate steps and final output from the chain.
Let’s take a look at both approaches!
Using Stream
============
All `Runnable` objects implement a method called `.stream()`.
This method is designed to stream the final output in chunks, yielding each chunk as soon as it is available.
Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time, and yield a corresponding output chunk.
The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.
The best place to start exploring streaming is with the most important components in LLM apps – the models themselves!
LLMs and Chat Models
------------------------------------------------------------------------------------
Large language models can take several seconds to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.
The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model token by token.
import "dotenv/config";
[Module: null prototype] { default: {} }
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
```typescript
const stream = await model.stream("Hello! Tell me about yourself.");
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
  console.log(`${chunk.content}|`);
}
```
|Hello|!| I|'m| an| AI| language| model| developed| by| Open|AI|.| I|'m| designed| to| assist| with| a| wide| range| of| tasks| and| topics|,| from| answering| questions| and| engaging| in| conversations|,| to| helping| with| writing| and| providing| information| on| various| subjects|.| I| don|'t| have| personal| experiences| or| emotions|,| as| I|'m| just| a| computer| program|,| but| I|'m| here| to| help| and| provide| information| to| the| best| of| my| abilities|.| Is| there| something| specific| you|'d| like| to| know| or| discuss|?||
Let’s have a look at one of the raw chunks:
```typescript
chunks[0];
```
```
AIMessageChunk {
  lc_serializable: true,
  lc_kwargs: { content: "", additional_kwargs: {} },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {}
}
```
We got back something called an `AIMessageChunk`. This chunk represents a part of an `AIMessage`.
Message chunks are additive by design – one can simply add them up using the `.concat()` method to get the state of the response so far!
```typescript
let finalChunk = chunks[0];
for (const chunk of chunks.slice(1, 5)) {
  finalChunk = finalChunk.concat(chunk);
}
finalChunk;
```
```
AIMessageChunk {
  lc_serializable: true,
  lc_kwargs: { content: "Hello! I'm", additional_kwargs: {} },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Hello! I'm",
  name: undefined,
  additional_kwargs: {}
}
```
Chains
------------------------------------------
Virtually all LLM applications involve more steps than just a call to a language model.
Let’s build a simple chain using `LangChain Expression Language` (`LCEL`) that combines a prompt, a model, and a parser, and verify that streaming works.
We will use `StringOutputParser` to parse the output from the model. This is a simple parser that extracts the content field from an `AIMessageChunk`, giving us the `token` returned by the model.
tip
LCEL is a declarative way to specify a “program” by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream`, allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const stream = await chain.stream({
  topic: "parrot",
});

for await (const chunk of stream) {
  console.log(`${chunk}|`);
}
```
|Sure|!| Here|'s| a| par|rot|-themed| joke| for| you|:|Why| did| the| par|rot| bring| a| ladder| to| the| party|?|Because| it| wanted| to| be| a| high| f|lier|!||
note
You do not have to use the `LangChain Expression Language` to use LangChain and can instead rely on a standard **imperative** programming approach by calling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.
If that works for your needs, then that’s fine by us 👌!
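For instance, a minimal imperative sketch of the same flow (reusing the `prompt`, `model`, and `parser` defined above) might look like this:

```typescript
// Invoke each component individually instead of composing with LCEL.
const promptValue = await prompt.invoke({ topic: "parrot" });
const message = await model.invoke(promptValue);
const output = await parser.invoke(message);
console.log(output);
```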
### Working with Input Streams
What if you wanted to stream JSON from the output as it was being generated?
If you were to rely on `JSON.parse` to parse the partial JSON, the parsing would fail, as the partial JSON wouldn’t be valid JSON.
You’d likely be at a loss for what to do and conclude that it wasn’t possible to stream JSON.
Well, turns out there is a way to do it - the parser needs to operate on the **input stream**, and attempt to “auto-complete” the partial json into a valid state.
Let’s see such a parser in action to understand what this means.
```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const chain = model.pipe(new JsonOutputParser());
const stream = await chain.stream(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
```
```
{ countries: [] }
{ countries: [ { name: "" } ] }
{ countries: [ { name: "France" } ] }
{ countries: [ { name: "France", population: "" } ] }
{ countries: [ { name: "France", population: "66" } ] }
{ countries: [ { name: "France", population: "66," } ] }
{ countries: [ { name: "France", population: "66,960" } ] }
{ countries: [ { name: "France", population: "66,960," } ] }
{ countries: [ { name: "France", population: "66,960,000" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46," } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660," } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "126" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "126," } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "126,500" } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "126,500," } ] }
{ countries: [ { name: "France", population: "66,960,000" }, { name: "Spain", population: "46,660,000" }, { name: "Japan", population: "126,500,000" } ] }
```
Now, let’s **break** streaming. We’ll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON. Since this new last step is just a function call with no defined streaming behavior, the streaming output from previous steps is aggregated, then passed as a single input to the function.
danger
Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream`.
tip
Later, we will discuss the `streamEvents` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**.
```typescript
// A function that operates on finalized inputs rather than on an input stream.
// Because it does not operate on the input stream, it breaks streaming.
const extractCountryNames = (inputs: Record<string, any>) => {
  if (!Array.isArray(inputs.countries)) {
    return "";
  }
  return JSON.stringify(inputs.countries.map((country) => country.name));
};

const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames);

const stream = await chain.stream(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
```
["France","Spain","Japan"]
### Non-streaming components
Like the above example, some built-in components like Retrievers do not offer any streaming. What happens if we try to `stream` them?
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const template = `Answer the question based only on the following context:
{context}

Question: {question}`;
const prompt = ChatPromptTemplate.fromTemplate(template);

const vectorstore = await MemoryVectorStore.fromTexts(
  ["mitochondria is the powerhouse of the cell", "buildings are made of brick"],
  [{}, {}],
  new OpenAIEmbeddings()
);
const retriever = vectorstore.asRetriever();

const chunks = [];
for await (const chunk of await retriever.stream(
  "What is the powerhouse of the cell?"
)) {
  chunks.push(chunk);
}
console.log(chunks);
```
```
[
  [
    Document {
      pageContent: "mitochondria is the powerhouse of the cell",
      metadata: {}
    },
    Document {
      pageContent: "buildings are made of brick",
      metadata: {}
    }
  ]
]
```
Stream just yielded the final result from that component.
This is OK! Not all components have to implement streaming – in some cases streaming is either unnecessary, difficult or just doesn’t make sense.
tip
An LCEL chain constructed using some non-streaming components will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.
Here’s an example of this:
```typescript
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";
import { StringOutputParser } from "@langchain/core/output_parsers";

const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n-----\n");
};

const retrievalChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);
```
```typescript
const stream = await retrievalChain.stream(
  "What is the powerhouse of the cell?"
);

for await (const chunk of stream) {
  console.log(`${chunk}|`);
}
```
|The| powerhouse| of| the| cell| is| the| mitochond|ria|.||
Now that we’ve seen how the `stream` method works, let’s venture into the world of streaming events!
Using Stream Events
---------------------------------------------------------------------------------
Event Streaming is a **beta** API. This API may change a bit based on feedback.
note
Introduced in @langchain/core **0.1.27**.
For the `streamEvents` method to work properly:
* Any custom functions / runnables must propagate callbacks.
* Set proper parameters on models to force the LLM to stream tokens.
* Let us know if anything doesn’t work as expected!
### Event Reference
Below is a reference table that shows some events that might be emitted by the various Runnable objects.
note
When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events rather than for `start` events.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' or AIMessageChunk(content="hello") | | |
| on_llm_end | [model name] | | 'Hello human!' | {"generations": [...], "llmOutput": None, ...} |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} | | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_chunk | [retriever name] | {documents: [...]} | | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |
### Chat Model
Let’s start off by looking at the events produced by a chat model.
```typescript
const events = [];
const eventStream = await model.streamEvents("hello", { version: "v1" });

for await (const event of eventStream) {
  events.push(event);
}
```
13
note
Hey what’s that funny version=“v1” parameter in the API?! 😾
This is a **beta API**, and we’re almost certainly going to make some changes to it.
This version parameter will allow us to minimize such breaking changes to your code.
In short, we are annoying you now, so we don’t have to annoy you later.
Let’s take a look at a few of the start events and a few of the end events.
```typescript
events.slice(0, 3);
```
```
[
  { run_id: "ce08e556-e8e7-4bfb-b8c0-e51926fc9c0c", event: "on_llm_start", name: "ChatOpenAI", tags: [], metadata: {}, data: { input: "hello" } },
  { event: "on_llm_stream", run_id: "ce08e556-e8e7-4bfb-b8c0-e51926fc9c0c", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {} } } },
  { event: "on_llm_stream", run_id: "ce08e556-e8e7-4bfb-b8c0-e51926fc9c0c", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "Hello", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "Hello", name: undefined, additional_kwargs: {} } } }
]
```
```typescript
events.slice(-2);
```
```
[
  { event: "on_llm_stream", run_id: "ce08e556-e8e7-4bfb-b8c0-e51926fc9c0c", tags: [], metadata: {}, name: "ChatOpenAI", data: { chunk: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {} } } },
  { event: "on_llm_end", name: "ChatOpenAI", run_id: "ce08e556-e8e7-4bfb-b8c0-e51926fc9c0c", tags: [], metadata: {}, data: { output: { generations: [ [Array] ] } } }
]
```
### Chain
Let’s revisit the example chain that parsed streaming JSON to explore the streaming events API.
```typescript
const chain = model.pipe(new JsonOutputParser());
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);

const events = [];
for await (const event of eventStream) {
  events.push(event);
}
```
If you examine the first few events, you’ll notice that there are **3** different start events rather than **2**.
The three start events correspond to:
1. The chain (model + parser)
2. The model
3. The parser
events.slice(0, 3);
[
  { run_id: "c486d08d-b426-43c3-8fe0-a943db575133", event: "on_chain_start", name: "RunnableSequence", tags: [], metadata: {}, data: { input: "Output a list of the countries france, spain and japan and their populations in JSON format. Use a d"... 129 more characters } },
  { event: "on_llm_start", name: "ChatOpenAI", run_id: "220e2e35-06d1-4db7-87a4-9c35643eee13", tags: [ "seq:step:1" ], metadata: {}, data: { input: { messages: [ [Array] ] } } },
  { event: "on_parser_start", name: "JsonOutputParser", run_id: "34a7abe4-98ae-46ad-85ac-625e724468b1", tags: [ "seq:step:2" ], metadata: {}, data: {} }
]
What do you think you’d see if you looked at the last 3 events? What about the middle?
Let’s use this API to output the stream events from the model and the parser. We’re ignoring start events, end events, and events from the chain.
let eventCount = 0;
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);
for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 30) {
    continue;
  }
  const eventType = event.event;
  if (eventType === "on_llm_stream") {
    console.log(`Chat model chunk: ${event.data.chunk.message.content}`);
  } else if (eventType === "on_parser_stream") {
    console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`);
  }
  eventCount += 1;
}
Chat model chunk:
Chat model chunk: {"
Chat model chunk: countries
Chat model chunk: ":
Parser chunk: {"countries":[]}
Chat model chunk: [
Chat model chunk:
Chat model chunk: {"
Chat model chunk: name
Chat model chunk: ":
Parser chunk: {"countries":[{"name":""}]}
Chat model chunk: "
Parser chunk: {"countries":[{"name":"fr"}]}
Chat model chunk: fr
Parser chunk: {"countries":[{"name":"france"}]}
Chat model chunk: ance
Chat model chunk: ",
Chat model chunk: "
Chat model chunk: population
Chat model chunk: ":
Parser chunk: {"countries":[{"name":"france","population":""}]}
Chat model chunk: "
Parser chunk: {"countries":[{"name":"france","population":"67"}]}
Because both the model and the parser support streaming, we see streaming events from both components in real time! Neat! 🦜
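One simple way to put this to use (a sketch, an assumption beyond what the page shows): since each `on_parser_stream` chunk is the most complete parse so far, keeping only the latest chunk gives you a progressively updating view of the final JSON.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});
const chain = model.pipe(new JsonOutputParser());

// Each parser chunk supersedes the previous one.
let latest: Record<string, any> | undefined;
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format.`,
  { version: "v1" }
);
for await (const event of eventStream) {
  if (event.event === "on_parser_stream") {
    latest = event.data.chunk; // e.g. re-render a UI with `latest` here
  }
}
console.log(latest); // the fully parsed object
```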
### Filtering Events[](#filtering-events "Direct link to Filtering Events")
Because this API produces so many events, it is useful to be able to filter on events.
You can filter by component `name`, component `tags`, or component `type`.
#### By Name[](#by-name "Direct link to By Name")
const chain = model
  .withConfig({ runName: "model" })
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }));
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeNames: ["my_parser"] }
);
let eventCount = 0;
for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ event: "on_parser_start", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: {} }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: {} } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ {} ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "" } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France" } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: "" } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: "67" } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: "67," } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: "67,081" } ] } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "c889ec6f-6050-40c2-8fdb-c24ab88606c3", tags: [ "seq:step:2" ], metadata: {}, data: { chunk: { countries: [ { name: "France", population: "67,081," } ] } } }
#### By Type[](#by-type "Direct link to By type")
const chain = model
  .withConfig({ runName: "model" })
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }));
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeTypes: ["llm"] }
);
let eventCount = 0;
for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ event: "on_llm_start", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { input: { messages: [ [ [HumanMessage] ] ] } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "{\n", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "{\n", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "{\n", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " ", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " ", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " ", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: ' "', generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: ' "', additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: ' "', name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "countries", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "countries", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "countries", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: '":', generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: '":', additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: '":', name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " [\n", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " [\n", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " [\n", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " ", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " ", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " ", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " {\n", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " {\n", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " {\n", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "model", run_id: "0c525b62-0d00-461c-9d1e-1bd8b339e711", tags: [ "seq:step:1" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " ", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " ", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " ", name: undefined, additional_kwargs: {} } } } }
#### By Tags[](#by-tags "Direct link to By Tags")
caution
Tags are inherited by child components of a given runnable.
If you’re using tags to filter, make sure that this is what you want.
const chain = model
  .pipe(new JsonOutputParser().withConfig({ runName: "my_parser" }))
  .withConfig({ tags: ["my_chain"] });
const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" },
  { includeTags: ["my_chain"] }
);
let eventCount = 0;
for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 10) {
    continue;
  }
  console.log(event);
  eventCount += 1;
}
{ run_id: "e7abe3de-2402-49f1-a9d7-622f6aa2f5b9", event: "on_chain_start", name: "RunnableSequence", tags: [ "my_chain" ], metadata: {}, data: { input: "Output a list of the countries france, spain and japan and their populations in JSON format. Use a d"... 129 more characters } }
{ event: "on_llm_start", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { input: { messages: [ [ [HumanMessage] ] ] } } }
{ event: "on_parser_start", name: "my_parser", run_id: "df3b3f2b-8b67-4eeb-9376-a21799475e8f", tags: [ "seq:step:2", "my_chain" ], metadata: {}, data: {} }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "", name: undefined, additional_kwargs: {} } } } }
{ event: "on_parser_stream", name: "my_parser", run_id: "df3b3f2b-8b67-4eeb-9376-a21799475e8f", tags: [ "seq:step:2", "my_chain" ], metadata: {}, data: { chunk: {} } }
{ event: "on_chain_stream", run_id: "e7abe3de-2402-49f1-a9d7-622f6aa2f5b9", tags: [ "my_chain" ], metadata: {}, name: "RunnableSequence", data: { chunk: {} } }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "{\n", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "{\n", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "{\n", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: " ", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: " ", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: " ", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: ' "', generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: ' "', additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: ' "', name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: "countries", generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: "countries", additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: "countries", name: undefined, additional_kwargs: {} } } } }
{ event: "on_llm_stream", name: "ChatOpenAI", run_id: "4bc4598c-3bf9-44d2-9c30-f9c635875b31", tags: [ "seq:step:1", "my_chain" ], metadata: {}, data: { chunk: ChatGenerationChunk { text: '":', generationInfo: { prompt: 0, completion: 0 }, message: AIMessageChunk { lc_serializable: true, lc_kwargs: { content: '":', additional_kwargs: {} }, lc_namespace: [ "langchain_core", "messages" ], content: '":', name: undefined, additional_kwargs: {} } } } }
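Note how every event above carries `my_chain`, including the model’s and the parser’s, because the tag was set on the whole chain. If that is not what you want, one alternative (a sketch, an assumption beyond this page’s example) is to attach the tag only to the component you care about:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});
// Tag only the parser: the model's events will NOT carry "only_parser",
// so the tag filter matches parser events alone.
const chain = model.pipe(
  new JsonOutputParser().withConfig({ tags: ["only_parser"] })
);

const eventStream = await chain.streamEvents(
  `Output a list of the countries france, spain and japan in JSON format.`,
  { version: "v1" },
  { includeTags: ["only_parser"] }
);
for await (const event of eventStream) {
  console.log(event.event); // only on_parser_* events
}
```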
### Non-streaming components[](#non-streaming-components-1 "Direct link to Non-streaming components")
Remember how some components don’t stream well because they don’t operate on **input streams**?
While such components can break streaming of the final output when using `stream`, `streamEvents` will still yield streaming events from intermediate steps that support streaming!
// A function that operates on finalized inputs rather than on an input stream.
// Because it does not operate on input streams, it breaks streaming.
const extractCountryNames = (inputs: Record<string, any>) => {
  if (!Array.isArray(inputs.countries)) {
    return "";
  }
  return JSON.stringify(inputs.countries.map((country) => country.name));
};

const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames);

const stream = await chain.stream(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`
);

for await (const chunk of stream) {
  console.log(chunk);
}
["France","Spain","Japan"]
As expected, the `stream` API doesn’t work correctly here: because `extractCountryNames` doesn’t operate on streams, the chain only yields its final output as a single chunk.
Now, let’s confirm that with `streamEvents` we’re still seeing streaming output from the model and the parser.
const eventStream = await chain.streamEvents(
  `output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key "name" and "population"`,
  { version: "v1" }
);
let eventCount = 0;
for await (const event of eventStream) {
  // Truncate the output
  if (eventCount > 30) {
    continue;
  }
  const eventType = event.event;
  if (eventType === "on_llm_stream") {
    console.log(`Chat model chunk: ${event.data.chunk.message.content}`);
  } else if (eventType === "on_parser_stream") {
    console.log(`Parser chunk: ${JSON.stringify(event.data.chunk)}`);
  }
  eventCount += 1;
}
Chat model chunk:
Parser chunk: {}
Chat model chunk: {
Chat model chunk:
Chat model chunk: "
Chat model chunk: countries
Chat model chunk: ":
Parser chunk: {"countries":[]}
Chat model chunk: [
Chat model chunk:
Parser chunk: {"countries":[{}]}
Chat model chunk: {
Chat model chunk:
Chat model chunk: "
Chat model chunk: name
Chat model chunk: ":
Parser chunk: {"countries":[{"name":""}]}
Chat model chunk: "
Parser chunk: {"countries":[{"name":"France"}]}
Chat model chunk: France
Chat model chunk: ",
Chat model chunk:
Chat model chunk: "
Chat model chunk: population
Chat model chunk: ":
Parser chunk: {"countries":[{"name":"France","population":""}]}
Chat model chunk: "
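One practical pattern this enables (a sketch, an assumption beyond the page’s examples): expose just the model’s tokens as their own async generator, so a UI can stream text even though the chain’s last step doesn’t stream.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});
const extractCountryNames = (inputs: Record<string, any>) => {
  if (!Array.isArray(inputs.countries)) {
    return "";
  }
  return JSON.stringify(inputs.countries.map((country) => country.name));
};
const chain = model.pipe(new JsonOutputParser()).pipe(extractCountryNames);

// Yield only the model's tokens, ignoring all other events.
async function* tokenStream(input: string) {
  const events = await chain.streamEvents(input, { version: "v1" });
  for await (const event of events) {
    if (event.event === "on_llm_stream") {
      yield event.data.chunk.message.content as string;
    }
  }
}

for await (const token of tokenStream("output a list of countries in JSON")) {
  process.stdout.write(token);
}
```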
* * *
Route between multiple Runnables
================================
This notebook covers how to do routing in the LangChain Expression Language.
Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.
There are two ways to perform routing:
1. Using a RunnableBranch.
2. Writing a custom factory function that takes the input of a previous step and returns a runnable. Importantly, this function should return a runnable and NOT actually execute it (a minimal sketch follows this list).
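Here is a minimal sketch of the second approach. The names below (`mathChain`, `generalChain`, `route`) are illustrative and not part of the examples further down; the key point is that `route` returns a runnable without invoking it.

```typescript
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";

// Illustrative stand-ins for real prompt + model chains.
const mathChain = RunnableLambda.from(
  (x: { question: string }) => `math answer to: ${x.question}`
);
const generalChain = RunnableLambda.from(
  (x: { question: string }) => `general answer to: ${x.question}`
);

// The factory RETURNS a runnable; the framework invokes it with the same input.
const route = (x: { question: string }) =>
  x.question.toLowerCase().includes("math") ? mathChain : generalChain;

const fullChain = RunnableSequence.from([
  { question: (x: { question: string }) => x.question },
  route,
]);

await fullChain.invoke({ question: "a math question" });
// "math answer to: a math question"
```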
We'll illustrate both methods using a two-step sequence where the first step classifies an input question as being about LangChain, Anthropic, or Other, then routes to a corresponding prompt chain.
Using a RunnableBranch[](#using-a-runnablebranch "Direct link to Using a RunnableBranch")
------------------------------------------------------------------------------------------
A RunnableBranch is initialized with a list of (condition, runnable) pairs and a default runnable. It selects a branch by passing the input it's invoked with to each condition in order, then runs the runnable corresponding to the first condition that evaluates to true.
If no provided conditions match, it runs the default runnable.
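Before the full example, here is a minimal standalone sketch (the lambdas are illustrative stand-ins for real chains, not from this page):

```typescript
import { RunnableBranch, RunnableLambda } from "@langchain/core/runnables";

const branch = RunnableBranch.from([
  [
    // The first matching condition wins.
    (x: string) => x.toLowerCase().includes("math"),
    RunnableLambda.from((x: string) => `math branch: ${x}`),
  ],
  // The default runnable, used when no condition matches.
  RunnableLambda.from((x: string) => `default branch: ${x}`),
]);

await branch.invoke("a MATH question"); // "math branch: a MATH question"
await branch.invoke("hello"); // "default branch: hello"
```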
Here's an example of what it looks like in action:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableBranch, RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate = PromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`. Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);
/*
  Anthropic
*/

const langChainChain = PromptTemplate.fromTemplate(
  `You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = PromptTemplate.fromTemplate(
  `You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = PromptTemplate.fromTemplate(
  `Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const branch = RunnableBranch.from([
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("anthropic"),
    anthropicChain,
  ],
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("langchain"),
    langChainChain,
  ],
  generalChain,
]);

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  branch,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});
console.log(result1);
/*
  AIMessage {
    content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
      '\n' +
      "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
      '\n' +
      "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
      '\n' +
      "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
      '\n' +
      'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
      '\n' +
      'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
    additional_kwargs: {}
  }
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});
console.log(result2);
/*
  AIMessage {
    content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
      '\n' +
      'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
      '\n' +
      'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
      '\n' +
      'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
      '\n' +
      "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
      '\n' +
      'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
      '\n' +
      'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
    additional_kwargs: {}
  }
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});
console.log(result3);
/*
  AIMessage {
    content: ' 4',
    additional_kwargs: {}
  }
*/
#### API Reference:
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableBranch](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableBranch.html) from `@langchain/core/runnables`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Using a custom function[](#using-a-custom-function "Direct link to Using a custom function")
---------------------------------------------------------------------------------------------
You can also use a custom function to route between different outputs. Here's an example:
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate = PromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`. Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);
/*
  Anthropic
*/

const langChainChain = PromptTemplate.fromTemplate(
  `You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = PromptTemplate.fromTemplate(
  `You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = PromptTemplate.fromTemplate(
  `Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const route = ({ topic }: { input: string; topic: string }) => {
  if (topic.toLowerCase().includes("anthropic")) {
    return anthropicChain;
  } else if (topic.toLowerCase().includes("langchain")) {
    return langChainChain;
  } else {
    return generalChain;
  }
};

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  route,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});
console.log(result1);
/*
  AIMessage {
    content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
      '\n' +
      "First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
      '\n' +
      "Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
      '\n' +
      "You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
      '\n' +
      'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
      '\n' +
      'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
    additional_kwargs: {}
  }
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});
console.log(result2);
/*
  AIMessage {
    content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
      '\n' +
      'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
      '\n' +
      'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
      '\n' +
      'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
      '\n' +
      "Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
      '\n' +
      'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
      '\n' +
      'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
    additional_kwargs: {}
  }
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});
console.log(result3);
/*
  AIMessage {
    content: ' 4',
    additional_kwargs: {}
  }
*/
#### API Reference:
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
* * *
Cookbook
========
Example code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different Runnable (the core LCEL interface) components to achieve various tasks. If you're just getting acquainted with LCEL, the [Prompt + LLM](/v0.1/docs/expression_language/cookbook/prompt_llm_parser/) page is a good place to start.
Several pages in this section include embedded interactive screencasts from [Scrimba](https://scrimba.com). They're a great resource for getting started - you can edit the included code whenever you want, just as if you were pair programming with a teacher!
* 📄️ [Prompt + LLM](/v0.1/docs/expression_language/cookbook/prompt_llm_parser/): One of the most foundational Expression Language compositions is taking:
* 📄️ [Multiple chains](/v0.1/docs/expression_language/cookbook/multiple_chains/): Runnables can be used to combine multiple Chains together:
* 📄️ [Retrieval augmented generation (RAG)](/v0.1/docs/expression_language/cookbook/retrieval/): Let's now look at adding in a retrieval step to a prompt and an LLM, which adds up to a "retrieval-augmented generation" chain:
* 📄️ [Querying a SQL DB](/v0.1/docs/expression_language/cookbook/sql_db/): We can replicate our SQLDatabaseChain with Runnables.
* 📄️ [Adding memory](/v0.1/docs/expression_language/cookbook/adding_memory/): This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook them up manually.
* 📄️ [Using tools](/v0.1/docs/expression_language/cookbook/tools/): Tools are also runnables, and can therefore be used within a chain:
* 📄️ [Agents](/v0.1/docs/expression_language/cookbook/agents/): You can pass a Runnable into an agent.
* * *
Modules
=======
LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
#### [Model I/O](/v0.1/docs/modules/model_io/)[](#model-io "Direct link to model-io")
Interface with language models
#### [Data connection](/v0.1/docs/modules/data_connection/)[](#data-connection "Direct link to data-connection")
Interface with application-specific data
#### [Chains](/v0.1/docs/modules/chains/)[](#chains "Direct link to chains")
Construct sequences of calls
#### [Agents](/v0.1/docs/modules/agents/)[](#agents "Direct link to agents")
Let chains choose which tools to use given high-level directives
#### [Memory](/v0.1/docs/modules/memory/)[](#memory "Direct link to memory")
Persist application state between runs of a chain
#### [Callbacks](/v0.1/docs/modules/callbacks/)[](#callbacks "Direct link to callbacks")
Log and stream intermediate steps of any chain
#### [Experimental](/v0.1/docs/modules/experimental/)[](#experimental "Direct link to experimental")
Experimental modules whose abstractions have not fully settled
* * *
Model I/O
=========
The core element of any language model application is... the model. LangChain gives you the building blocks to interface with any language model.
![model_io_diagram](/v0.1/assets/images/model_io-1f23a36233d7731e93576d6885da2750.jpg)
[Conceptual Guide](/v0.1/docs/modules/model_io/concepts/)[](#conceptual-guide "Direct link to conceptual-guide")
-----------------------------------------------------------------------------------------------------------------
A conceptual explanation of messages, prompts, LLMs vs ChatModels, and output parsers. You should read [this section](/v0.1/docs/modules/model_io/concepts/) before getting started.
[Quick Start](/v0.1/docs/modules/model_io/quick_start/)[](#quick-start "Direct link to quick-start")
-----------------------------------------------------------------------------------------------------
Covers the basics of getting started working with different types of models. You should walk through [this section](/v0.1/docs/modules/model_io/quick_start/) if you want to get an overview of the functionality.
[Prompts](/v0.1/docs/modules/model_io/prompts/)[](#prompts "Direct link to prompts")
-------------------------------------------------------------------------------------
[This section](/v0.1/docs/modules/model_io/prompts/) deep dives into the different types of prompt templates and how to use them.
[LLMs](/v0.1/docs/modules/model_io/llms/)[](#llms "Direct link to llms")
-------------------------------------------------------------------------
[This section](/v0.1/docs/modules/model_io/llms/) covers functionality related to the LLM class. This is a type of model that takes a text string as input and returns a text string.
[ChatModels](/v0.1/docs/modules/model_io/chat/)[](#chatmodels "Direct link to chatmodels")
-------------------------------------------------------------------------------------------
[This section](/v0.1/docs/modules/model_io/chat/) covers functionality related to the ChatModel class. This is a type of model that takes a list of messages as input and returns a message.
[Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)[](#output-parsers "Direct link to output-parsers")
-----------------------------------------------------------------------------------------------------------------
Output parsers are responsible for transforming the output of LLMs and ChatModels into more structured data. [This section](/v0.1/docs/modules/model_io/output_parsers/) covers the different types of output parsers.
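Tying these pieces together, here is a minimal sketch (not from this page; the prompt and model choice are arbitrary) of the canonical prompt, model, and output parser pipeline:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Prompt -> chat model -> output parser.
const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({});
const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "bears" });
console.log(result); // a plain string, thanks to the output parser
```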
* * *
Retrieval
=========
Many LLM applications require user-specific data that is not part of the model's training set. The primary way of accomplishing this is through Retrieval Augmented Generation (RAG). In this process, external data is _retrieved_ and then passed to the LLM when doing the _generation_ step.
LangChain provides all the building blocks for RAG applications - from simple to complex. This section of the documentation covers everything related to the _retrieval_ step - e.g. the fetching of the data. Although this sounds simple, it can be subtly complex. This encompasses several key modules.
![data_connection_diagram](/v0.1/assets/images/data_connection-c42d68c3d092b85f50d08d4cc171fc25.jpg)
**[Document loaders](/v0.1/docs/modules/data_connection/document_loaders/)**
Load documents from many different sources. LangChain provides many different document loaders as well as integrations with other major providers in the space, such as Unstructured. We provide integrations to load all types of documents (html, PDF, code) from all types of locations (private s3 buckets, public websites).
**[Text Splitting](/v0.1/docs/modules/data_connection/document_transformers/)**
A key part of retrieval is fetching only the relevant parts of documents. This involves several transformation steps in order to best prepare the documents for retrieval. One of the primary ones here is splitting (or chunking) a large document into smaller chunks. LangChain provides several different algorithms for doing this, as well as logic optimized for specific document types (code, markdown, etc).
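As a concrete illustration (a sketch, not from this page; the chunk sizes and sample text are arbitrary), splitting a long text with the recursive character splitter might look like:

```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// An arbitrary long text standing in for a real document.
const longText =
  "LangChain is a framework for developing applications powered by language models. ".repeat(50);

// Chunk sizes here are illustrative defaults, not recommendations.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const docs = await splitter.createDocuments([longText]);
console.log(docs.length, docs[0].pageContent.slice(0, 80));
```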
**[Text embedding models](/v0.1/docs/modules/data_connection/text_embedding/)**
Another key part of retrieval is creating embeddings for documents. Embeddings capture the semantic meaning of text, allowing you to quickly and efficiently find other pieces of text that are similar. LangChain provides integrations with different embedding providers and methods, from open-source to proprietary APIs, allowing you to choose the one best suited for your needs. LangChain exposes a standard interface, allowing you to easily swap between models.
**[Vector stores](/v0.1/docs/modules/data_connection/vectorstores/)**
With the rise of embeddings, there has emerged a need for databases to support efficient storage and searching of these embeddings. LangChain provides integrations with many different vectorstores, from open-source local ones to cloud-hosted proprietary ones, allowing you to choose the one best suited for your needs. LangChain exposes a standard interface, allowing you to easily swap between vector stores.
**[Retrievers](/v0.1/docs/modules/data_connection/retrievers/)**
Once the data is in the database, you still need to retrieve it. LangChain supports many different retrieval algorithms and is one of the places where we add the most value. We support basic methods that are easy to get started with - namely simple semantic search. However, we have also added a collection of algorithms on top of this to increase performance. These include:
* [Parent Document Retriever](/v0.1/docs/modules/data_connection/retrievers/parent-document-retriever/): This allows you to create multiple embeddings per parent document, allowing you to look up smaller chunks but return larger context.
* [Self Query Retriever](/v0.1/docs/modules/data_connection/retrievers/self_query/): User questions often contain a reference to something that isn't just semantic, but rather expresses some logic that can best be represented as a metadata filter. Self-query allows you to parse out the _semantic_ part of a query from the other _metadata filters_ present in the query.
* And more!
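To see how these pieces fit together, here is a minimal end-to-end sketch: split a document into chunks, embed the chunks into an in-memory vector store, and retrieve the most relevant ones. It assumes an `OPENAI_API_KEY` environment variable is set; the sample text and query are placeholders.

```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// 1. Split a long document into smaller chunks.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
});
const docs = await splitter.createDocuments([
  "Retrieval Augmented Generation (RAG) fetches external data and passes it to the LLM at generation time...",
]);

// 2. Embed the chunks and store them in an in-memory vector store.
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// 3. Expose the store as a retriever and fetch the most relevant chunks.
const retriever = vectorStore.asRetriever();
const relevantDocs = await retriever.getRelevantDocuments("What is RAG?");
```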
https://js.langchain.com/v0.1/docs/security/
Security
========
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
Best Practices
--------------
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container (see the sketch after this list).
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
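As a minimal sketch of the first scenario's mitigation - not a complete security solution - the custom tool below only reads files inside a single workspace directory and rejects path traversal. The tool name and directory are hypothetical.

```typescript
import * as fs from "node:fs/promises";
import * as path from "node:path";
import { DynamicTool } from "@langchain/core/tools";

// Hypothetical workspace directory the agent is allowed to read from.
const SAFE_DIR = path.resolve("./agent-workspace");

const readWorkspaceFile = new DynamicTool({
  name: "read_workspace_file",
  description:
    "Reads a UTF-8 text file from the agent workspace. Input is a path relative to the workspace.",
  func: async (relativePath: string) => {
    const resolved = path.resolve(SAFE_DIR, relativePath);
    // Reject inputs that escape the workspace, e.g. "../../etc/passwd".
    if (!resolved.startsWith(SAFE_DIR + path.sep)) {
      return "Error: access outside the workspace directory is not allowed.";
    }
    return await fs.readFile(resolved, "utf-8");
  },
});
```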
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
Reporting a Vulnerability
-------------------------
Please report security vulnerabilities by email to [security@langchain.dev](mailto:security@langchain.dev). This will ensure the issue is promptly triaged and acted upon as needed.
Enterprise solutions
--------------------
LangChain offers enterprise solutions for customers who have additional security requirements. Please contact us at [sales@langchain.dev](mailto:sales@langchain.dev).
https://js.langchain.com/v0.1/docs/modules/chains/
Chains
======
Chains refer to sequences of calls - whether to an LLM, a tool, or a data preprocessing step. The primary supported way to do this is with [LCEL](/v0.1/docs/expression_language/).
LCEL is great for constructing your own chains, but it’s also nice to have chains that you can use off-the-shelf. There are two types of off-the-shelf chains that LangChain supports:
* Chains that are built with LCEL. In this case, LangChain offers a higher-level constructor method. However, all that is being done under the hood is constructing a chain with LCEL.
* \[Legacy\] Chains constructed by subclassing from a legacy Chain class. These chains do not use LCEL under the hood but are rather standalone classes.
We are working on creating methods that produce LCEL versions of all chains. We are doing this for a few reasons.
1. Chains constructed in this way are nice because if you want to modify the internals of a chain you can simply modify the LCEL.
2. These chains natively support streaming, async, and batch out of the box.
3. These chains automatically get observability at each step.
This page contains two lists. First, a list of all LCEL chain constructors. Second, a list of all legacy Chains.
LCEL Chains
-----------
Below is a table of all LCEL chain constructors. For each one, we report on:
#### Chain Constructor
The constructor function for this chain. These are all methods that return LCEL runnables. We also link to the API documentation.
#### Function Calling
Whether this chain requires OpenAI function calling.
#### Other Tools
What other tools (if any) are used in this chain.
#### When to Use
Our commentary on when to use this chain.

| Chain Constructor | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| [createStuffDocumentsChain](https://api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure the result fits within the context window of the LLM you are using. |
| [createOpenAIFnRunnable](https://api.js.langchain.com/functions/langchain_chains_openai_functions.createOpenAIFnRunnable.html) | ✅ | | If you want to use OpenAI function calling to OPTIONALLY structure an output response. You may pass in multiple functions for the chain to call, but it is not required to call any of them. |
| [createStructuredOutputRunnable](https://api.js.langchain.com/functions/langchain_chains_openai_functions.createStructuredOutputRunnable.html) | ✅ | | If you want to use OpenAI function calling to FORCE the LLM to respond with a certain function. You may only pass in one function, and the chain will ALWAYS return this response. |
| [createHistoryAwareRetriever](https://api.js.langchain.com/functions/langchain_chains_history_aware_retriever.createHistoryAwareRetriever.html) | | Retriever | This chain takes in conversation history and then uses that to generate a search query which is passed to the underlying retriever. |
| [createRetrievalChain](https://api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) | | Retriever | This chain takes in a user inquiry, which is then passed to the retriever to fetch relevant documents. Those documents (and the original inputs) are then passed to an LLM to generate a response (see the sketch after this table). |
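As a sketch of how two of these constructors compose - assuming an `OPENAI_API_KEY` is configured and using a toy in-memory vector store as the retriever - `createStuffDocumentsChain` formats retrieved documents into the prompt, and `createRetrievalChain` wires a retriever in front of it:

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { Document } from "@langchain/core/documents";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// A toy retriever over a single document.
const vectorStore = await MemoryVectorStore.fromDocuments(
  [new Document({ pageContent: "LCEL is the LangChain Expression Language." })],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();

// The prompt must contain a {context} variable for the stuffed documents.
const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the question based only on the following context:

{context}

Question: {input}`
);
const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ temperature: 0 }),
  prompt,
});
const retrievalChain = await createRetrievalChain({
  retriever,
  combineDocsChain,
});

const result = await retrievalChain.invoke({ input: "What is LCEL?" });
// result.answer holds the generated response; result.context holds the retrieved documents.
```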
Legacy Chains
-------------
Below we report on the legacy chain types that exist. We will maintain support for these until we are able to create an LCEL alternative. We cover:
#### Chain
Name of the chain, or name of the constructor method. If a constructor method, this will return a Chain subclass.
#### Function Calling
Whether this chain requires OpenAI function calling.
#### Other Tools
Other tools used in the chain.
#### When to Use
Our commentary on when to use this chain.
| Chain | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| [createOpenAPIChain](https://api.js.langchain.com/functions/langchain_chains.createOpenAPIChain.html) | | OpenAPI Spec | Similar to APIChain, this chain is designed to interact with APIs. The main difference is that it is optimized for ease of use with OpenAPI endpoints. |
| [ConversationalRetrievalQAChain](https://api.js.langchain.com/classes/langchain_chains.ConversationalRetrievalQAChain.html) | | Retriever | This chain can be used to have **conversations** with a document. It takes in a question and (optional) previous conversation history. If there is previous conversation history, it uses an LLM to rewrite the conversation into a query to send to a retriever (otherwise it just uses the newest user input). It then fetches those documents and passes them (along with the conversation) to an LLM to respond. |
| [StuffDocumentsChain](https://api.js.langchain.com/classes/langchain_chains.StuffDocumentsChain.html) | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure the result fits within the context window of the LLM you are using. |
| [MapReduceDocumentsChain](https://api.js.langchain.com/classes/langchain_chains.MapReduceDocumentsChain.html) | | | This chain first passes each document through an LLM, then reduces them using the ReduceDocumentsChain. Useful in the same situations as ReduceDocumentsChain, but does an initial LLM call before trying to reduce the documents. |
| [RefineDocumentsChain](https://api.js.langchain.com/classes/langchain_chains.RefineDocumentsChain.html) | | | This chain collapses documents by generating an initial answer based on the first document and then looping over the remaining documents to _refine_ its answer. It operates sequentially, so it cannot be parallelized. It is useful in similar situations as MapReduceDocumentsChain, but for cases where you want to build up an answer by refining the previous answer (rather than parallelizing calls). |
| [ConstitutionalChain](https://api.js.langchain.com/classes/langchain_chains.ConstitutionalChain.html) | | | This chain answers, then attempts to refine its answer based on constitutional principles that are provided. Use this when you want to enforce that a chain's answer follows some principles. |
| [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) | | | This chain simply combines a prompt with an LLM and an output parser. The recommended way to do this is just to use LCEL (see the sketch after this table). |
| [GraphCypherQAChain](https://api.js.langchain.com/classes/langchain_chains_graph_qa_cypher.GraphCypherQAChain.html) | | A graph that works with the Cypher query language | This chain constructs a Cypher query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
| [createExtractionChain](https://api.js.langchain.com/functions/langchain_chains.createExtractionChain.html) | ✅ | | Uses OpenAI function calling to extract information from text. |
| [createExtractionChainFromZod](https://api.js.langchain.com/functions/langchain_chains.createExtractionChainFromZod.html) | ✅ | | Uses OpenAI function calling and a Zod schema to extract information from text. |
| [SqlDatabaseChain](https://api.js.langchain.com/classes/langchain_chains_sql_db.SqlDatabaseChain.html) | | | Answers questions by generating and running SQL queries for a provided database. |
| [LLMRouterChain](https://api.js.langchain.com/classes/langchain_chains.LLMRouterChain.html) | | | This chain uses an LLM to route between potential options. |
| [MultiPromptChain](https://api.js.langchain.com/classes/langchain_chains.MultiPromptChain.html) | | | This chain routes input between multiple prompts. Use this when you have multiple potential prompts you could use to respond and want to route to just one. |
| [MultiRetrievalQAChain](https://api.js.langchain.com/classes/langchain_chains.MultiRetrievalQAChain.html) | | Retriever | This chain uses an LLM to route input questions to the appropriate retriever for question answering. |
| [loadQAChain](https://api.js.langchain.com/functions/langchain_chains.loadQAChain.html) | | Retriever | Does question answering over documents you pass in, and cites its sources. Use this over RetrievalQAChain when you want to pass in the documents directly (rather than rely on a passed retriever to get them). |
| [APIChain](https://api.js.langchain.com/classes/langchain_chains.APIChain.html) | | Requests Wrapper | This chain uses an LLM to convert a query into an API request, executes that request, gets back a response, and then passes that response to an LLM to respond. Prefer `createOpenAPIChain` if you have a spec available. |
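For example, since the table recommends LCEL over `LLMChain`, here is a minimal sketch of the LCEL equivalent - a prompt piped into a model and an output parser (assuming an `OPENAI_API_KEY` is configured):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// LCEL equivalent of an LLMChain: prompt | model | output parser.
const chain = PromptTemplate.fromTemplate("Tell me a joke about {topic}")
  .pipe(new ChatOpenAI({ temperature: 0 }))
  .pipe(new StringOutputParser());

const joke = await chain.invoke({ topic: "bears" });
```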
https://js.langchain.com/v0.1/docs/modules/agents/
Agents
======
The core idea of agents is to use a language model to choose a sequence of actions to take. In chains, a sequence of actions is hardcoded (in code). In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
[Quick Start](/v0.1/docs/modules/agents/quick_start/)
-----------------------------------------------------
For a quick start to working with agents, please check out [this getting started guide](/v0.1/docs/modules/agents/quick_start/). This covers basics like initializing an agent, creating tools, and adding memory.
[Concepts](/v0.1/docs/modules/agents/concepts/)
-----------------------------------------------
There are several key concepts to understand when building agents: Agents, AgentExecutor, Tools, and Toolkits. For an in-depth explanation, please check out [this conceptual guide](/v0.1/docs/modules/agents/concepts/).
[Agent Types](/v0.1/docs/modules/agents/agent_types/)
-----------------------------------------------------
There are many different types of agents to use. For an overview of the different types and when to use them, please check out [this section](/v0.1/docs/modules/agents/agent_types/).
[Tools](/v0.1/docs/modules/agents/tools/)
-----------------------------------------
Agents are only as good as the tools they have. For a comprehensive guide on tools, please see [this section](/v0.1/docs/modules/agents/tools/).
How To Guides
-------------
Agents have a lot of related functionality! Check out various guides including:
* [Building a custom agent](/v0.1/docs/modules/agents/how_to/custom_agent/)
* [Streaming (of both intermediate steps and tokens)](/v0.1/docs/modules/agents/how_to/streaming/)
* [Building an agent that returns structured output](/v0.1/docs/modules/agents/how_to/agent_structured/)
* Lots of functionality around using AgentExecutor, including: [handling parsing errors](/v0.1/docs/modules/agents/how_to/handle_parsing_errors/), [returning intermediate steps](/v0.1/docs/modules/agents/how_to/intermediate_steps/), and [capping the max number of iterations](/v0.1/docs/modules/agents/how_to/max_iterations/).
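To make the core idea concrete, here is a minimal sketch of constructing and running an agent - assuming an `OPENAI_API_KEY` is configured; the Hub prompt and calculator tool are just one common combination, not the only way to build an agent:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { Calculator } from "@langchain/community/tools/calculator";

// Pull a standard OpenAI-functions agent prompt from the LangChain Hub.
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-functions-agent");

const tools = [new Calculator()];
const agent = await createOpenAIFunctionsAgent({
  llm: new ChatOpenAI({ temperature: 0 }),
  tools,
  prompt,
});

// The AgentExecutor runs the reasoning loop: call the model, execute the
// chosen tool, feed the result back, and repeat until a final answer.
const executor = new AgentExecutor({ agent, tools });
const result = await executor.invoke({ input: "What is 23 * 19?" });
```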
https://js.langchain.com/v0.1/docs/modules/memory/
\[Beta\] Memory
===============
Many LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At a bare minimum, a conversational system should be able to access some window of past messages directly. A more complex system will need a world model that it constantly updates, which allows it to do things like maintain information about entities and their relationships.
We call this ability to store information about past interactions "memory". LangChain provides utilities for adding memory to a system. These utilities can be used by themselves or incorporated seamlessly into a chain.
Most memory-related functionality in LangChain is marked as beta. This is for two reasons:
1. Most functionality (with some exceptions, see below) is not production ready.
2. Most functionality (with some exceptions, see below) works with Legacy chains, not the newer LCEL syntax.
The main exception to this is the ChatMessageHistory functionality. This functionality is largely production ready and does integrate with LCEL.
* [LCEL Runnables](/v0.1/docs/expression_language/how_to/message_history/): See these docs for an overview of how to use ChatMessageHistory with LCEL runnables.
* [Integrations](/v0.1/docs/integrations/chat_memory/): See these docs for an introduction to the various ChatMessageHistory integrations.
Introduction
------------
A memory system needs to support two basic actions: reading and writing. Recall that every chain defines some core execution logic that expects certain inputs. Some of these inputs come directly from the user, but some of these inputs can come from memory. A chain will interact with its memory system twice in a given run.
1. AFTER receiving the initial user inputs but BEFORE executing the core logic, a chain will READ from its memory system and augment the user inputs.
2. AFTER executing the core logic but BEFORE returning the answer, a chain will WRITE the inputs and outputs of the current run to memory, so that they can be referred to in future runs.
![](/v0.1/assets/images/memory_diagram-0627c68230aa438f9b5419064d63cbbc.png)
Building memory into a system
-----------------------------
The two core design decisions in any memory system are:
* How state is stored
* How state is queried
### Storing: List of chat messages
Underlying any memory is a history of all chat interactions. Even if these are not all used directly, they need to be stored in some form. One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages, from in-memory lists to persistent databases.
* [Chat message storage](/v0.1/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered.
### Querying: Data structures and algorithms on top of chat messages
Keeping a list of chat messages is fairly straightforward. What is less straightforward are the data structures and algorithms built on top of chat messages that serve the most useful view of those messages.
A very simple memory system might just return the most recent messages each run. A slightly more complex memory system might return a succinct summary of the past K messages. An even more sophisticated system might extract entities from stored messages and only return information about entities referenced in the current run.
Each application can have different requirements for how memory is queried. The memory module should make it easy to both get started with simple memory systems and write your own custom systems if needed.
* [Memory types](/v0.1/docs/modules/memory/types/): The various data structures and algorithms that make up the memory types LangChain supports
Get started
-----------
Let's take a look at what Memory actually looks like in LangChain. Here we'll cover the basics of interacting with an arbitrary memory class.
Let's take a look at how to use `BufferMemory` in chains. `BufferMemory` is an extremely simple form of memory that just keeps a list of chat messages in a buffer and passes those into the prompt template:
```typescript
import { BufferMemory } from "langchain/memory";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const memory = new BufferMemory();
await memory.chatHistory.addMessage(new HumanMessage("Hi!"));
await memory.chatHistory.addMessage(new AIMessage("What's up?"));
```
When using memory in a chain, there are a few key concepts to understand. Note that here we cover general concepts that are useful for most types of memory. Each individual memory type may have its own parameters and concepts.
### What variables get returned from memory
Before going into the chain, various variables are read from memory. These have specific names which need to align with the variables the chain expects. You can see what these variables are by calling `await memory.loadMemoryVariables({})`. Note that the empty dictionary that we pass in is just a placeholder for real variables. If the memory type you are using is dependent upon the input variables, you may need to pass values here.
```typescript
await memory.loadMemoryVariables({});
// { history: "Human: Hi!\nAI: What's up?" }
```
In this case, you can see that `loadMemoryVariables` returns a single key, `history`. This means that your chain (and likely your prompt) should expect an input named `history`. You can generally control this variable through parameters on the memory class. For example, if you want the memory variables to be returned under the key `chat_history` you can do:
```typescript
const memory2 = new BufferMemory({
  memoryKey: "chat_history",
});
await memory2.chatHistory.addMessage(new HumanMessage("Hi!"));
await memory2.chatHistory.addMessage(new AIMessage("What's up?"));
await memory2.loadMemoryVariables({});
// { chat_history: "Human: Hi!\nAI: What's up?" }
```
The parameter name to control these keys may vary per memory type, but it's important to understand (1) that this is controllable, and (2) how to control it.
### Whether memory is a string or a list of messages
One of the most common types of memory involves returning a list of chat messages. These can either be returned as a single string, all concatenated together (useful when they will be passed into LLMs) or a list of ChatMessages (useful when passed into ChatModels).
By default, they are returned as a single string. In order to return as a list of messages, you can set `returnMessages` to `true`.
```typescript
const messageMemory = new BufferMemory({
  returnMessages: true,
});
await messageMemory.chatHistory.addMessage(new HumanMessage("Hi!"));
await messageMemory.chatHistory.addMessage(new AIMessage("What's up?"));
await messageMemory.loadMemoryVariables({});
/*
  {
    history: [
      HumanMessage { content: 'Hi!', additional_kwargs: {} },
      AIMessage { content: "What's up?", additional_kwargs: {} }
    ]
  }
*/
```
### What keys are saved to memory
Oftentimes chains take in or return multiple input/output keys. In these cases, how can we know which keys we want to save to the chat message history? This is generally controllable via the `inputKey` and `outputKey` parameters on the memory types. These default to `undefined` - and if there is only one input/output key, the class will default to just using that key. However, if there are multiple input/output keys, then you MUST specify the name of which one to use, as sketched below.
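As a small sketch - assuming a hypothetical chain that takes both a `question` and a `context` input and returns a `text` output - you would pin down the keys like this:

```typescript
// Hypothetical multi-input chain: tell the memory exactly which keys to save.
const multiKeyMemory = new BufferMemory({
  memoryKey: "chat_history",
  inputKey: "question", // only the "question" input is written to history
  outputKey: "text", // only the "text" output is written to history
});
```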
### End-to-end example
Finally, let's take a look at using this in a chain. We'll use an `LLMChain`, and show working with both an LLM and a ChatModel.
#### Using an LLM
```typescript
import { OpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { LLMChain } from "langchain/chains";

const llm = new OpenAI({ temperature: 0 });

// Notice that a "chat_history" variable is present in the prompt template
const template = `You are a nice chatbot having a conversation with a human.

Previous conversation:
{chat_history}

New human question: {question}
Response:`;
const prompt = PromptTemplate.fromTemplate(template);

// Notice that we need to align the `memoryKey` with the variable in the prompt
const llmMemory = new BufferMemory({ memoryKey: "chat_history" });
const conversationChain = new LLMChain({
  llm,
  prompt,
  verbose: true,
  memory: llmMemory,
});

// Notice that we just pass in the `question` variable.
// `chat_history` gets populated by the memory class
await conversationChain.invoke({ question: "What is your name?" });
await conversationChain.invoke({ question: "What did I just ask you?" });

// { text: ' My name is OpenAI. What is your name?' }
// { text: ' You just asked me what my name is.' }
```
#### Using a chat model
```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const chatModel = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a nice chatbot having a conversation with a human."],
  // The variable name here is what must align with memory
  new MessagesPlaceholder("chat_history"),
  ["human", "{question}"],
]);

// Notice that we set `returnMessages: true` to return raw chat messages that are
// inserted into the MessagesPlaceholder.
// Additionally, note that `"chat_history"` aligns with the MessagesPlaceholder name.
const chatPromptMemory = new BufferMemory({
  memoryKey: "chat_history",
  returnMessages: true,
});
const chatConversationChain = new LLMChain({
  llm: chatModel,
  prompt: chatPrompt,
  verbose: true,
  memory: chatPromptMemory,
});

// Notice that we just pass in the `question` variable - `chat_history` gets
// populated by memory
await chatConversationChain.invoke({ question: "What is your name?" });
await chatConversationChain.invoke({ question: "What did I just ask you?" });

// { text: "Hello! I'm an AI chatbot, so I don't have a personal name. You can just call me Chatbot. How can I assist you today?" }
// { text: 'You just asked me what my name is.' }
```
Next steps
----------
And that's it for getting started! Please see the other sections for walkthroughs of more advanced topics.
https://js.langchain.com/v0.1/docs/guides/
Guides
======
Design guides for key parts of the development process
* [📄️ Debugging](/v0.1/docs/guides/debugging/): If you're building with LLMs, at some point something will break, and you'll need to debug.
* [🗃️ Deployment](/v0.1/docs/guides/deployment/) (2 items)
* [🗃️ Evaluation](/v0.1/docs/guides/evaluation/) (4 items)
* [📄️ Extending LangChain.js](/v0.1/docs/guides/extending_langchain/): Extending LangChain's base abstractions, whether you're planning to contribute back to the open-source repo or build a bespoke internal integration, is encouraged.
* [📄️ Fallbacks](/v0.1/docs/guides/fallbacks/): When working with language models, you may often encounter issues from the underlying APIs, e.g. rate limits or downtime.
* [📄️ LangSmith Walkthrough](/v0.1/docs/guides/langsmith_evaluation/): LangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will have to iterate on your prompts, chains, and other components to build a high-quality product.
* [📄️ Migrating to 0.1](/v0.1/docs/guides/migrating/): If you're still using the pre-0.1 version of LangChain, but want to upgrade to the latest version, we've created a script that can handle almost every aspect of the migration for you.
https://js.langchain.com/v0.2/docs/integrations/platforms/microsoft
Microsoft
=========
All functionality related to `Microsoft Azure` and other `Microsoft` products.
LLM
---
### Azure OpenAI
> [Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure`, is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
> [Azure OpenAI](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) is an `Azure` service with powerful language models from `OpenAI` including the `GPT-3`, `Codex` and `Embeddings model` series for content generation, summarization, semantic search, and natural language to code translation.
Set the environment variables to get access to the `Azure OpenAI` service.
Inside an environment variables file (`.env`):
```bash
AZURE_OPENAI_API_KEY="YOUR-API-KEY"
AZURE_OPENAI_API_VERSION="YOUR-API-VERSION"
AZURE_OPENAI_API_INSTANCE_NAME="YOUR-INSTANCE-NAME"
AZURE_OPENAI_API_DEPLOYMENT_NAME="YOUR-DEPLOYMENT-NAME"
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME="YOUR-EMBEDDINGS-NAME"
```
See a [usage example](/v0.2/docs/integrations/llms/azure).
Tip: See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
```
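As a small sketch of what the linked example does: assuming the environment variables above are set, the constructor picks them up automatically in Node.js (the prompt string is a placeholder).

```typescript
// Reads the AZURE_OPENAI_* environment variables set above automatically.
const model = new OpenAI({ temperature: 0.9 });
const res = await model.invoke("What is Azure OpenAI?");
```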
Text Embedding Models
---------------------
### Azure OpenAI
See a [usage example](/v0.2/docs/integrations/text_embedding/azure_openai).
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  azureOpenAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
});
```
Chat Models
-----------
### Azure OpenAI
See a [usage example](/v0.2/docs/integrations/chat/azure).
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "SOME_SECRET_VALUE", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
  azureOpenAIApiVersion: "YOUR-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
  azureOpenAIApiInstanceName: "{MY_INSTANCE_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
  azureOpenAIApiDeploymentName: "{DEPLOYMENT_NAME}", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
});
```
Document loaders
----------------
### Azure Blob Storage
> [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
> [Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol, Network File System (`NFS`) protocol, and `Azure Files REST API`. `Azure Files` are based on the `Azure Blob Storage`.
`Azure Blob Storage` is designed for:
* Serving images or documents directly to a browser.
* Storing files for distributed access.
* Streaming video and audio.
* Writing to log files.
* Storing data for backup and restore, disaster recovery, and archiving.
* Storing data for analysis by an on-premises or Azure-hosted service.
```bash
npm install @azure/storage-blob
# or
yarn add @azure/storage-blob
# or
pnpm add @azure/storage-blob
```
See a [usage example for the Azure Blob Storage](/v0.2/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container).
```typescript
import { AzureBlobStorageContainerLoader } from "langchain/document_loaders/web/azure_blob_storage_container";
```
See a [usage example for the Azure Files](/v0.2/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file).
```typescript
import { AzureBlobStorageFileLoader } from "langchain/document_loaders/web/azure_blob_storage_file";
```
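As a hedged sketch of using the container loader - the connection string, container name, and Unstructured API endpoint below are placeholders, and the loader parses the downloaded blobs via an [Unstructured](https://unstructured.io/) instance:

```typescript
import { AzureBlobStorageContainerLoader } from "langchain/document_loaders/web/azure_blob_storage_container";

const loader = new AzureBlobStorageContainerLoader({
  azureConfig: {
    connectionString: "YOUR-AZURE-STORAGE-CONNECTION-STRING",
    container: "my-container",
  },
  unstructuredConfig: {
    apiUrl: "http://localhost:8000/general/v0/general", // a self-hosted Unstructured endpoint
    apiKey: "", // optional when self-hosting
  },
});
const docs = await loader.load();
```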
https://js.langchain.com/v0.2/docs/integrations/platforms/aws
AWS
===
All functionality related to the [Amazon Web Services (AWS)](https://aws.amazon.com/) platform.
LLMs
----
### Bedrock
See a [usage example](/v0.2/docs/integrations/llms/bedrock).
import { Bedrock } from "langchain/llms/bedrock";
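A minimal sketch of instantiating the Bedrock LLM (the model id and region are placeholders; see the linked usage example for the exact options):
const model = new Bedrock({
  model: "anthropic.claude-v2", // placeholder model id
  region: "us-east-1", // placeholder region
});
const res = await model.invoke("Tell me a joke");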
### SageMaker Endpoint
> [Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.
We use `SageMaker` to host our model and expose it as the `SageMaker Endpoint`.
See a [usage example](/v0.2/docs/integrations/llms/aws_sagemaker).
import {
  SagemakerEndpoint,
  SageMakerLLMContentHandler,
} from "langchain/llms/sagemaker_endpoint";
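Because a SageMaker endpoint can serve an arbitrary payload format, you supply a content handler that serializes prompts and parses responses. Below is a rough sketch assuming a JSON-speaking endpoint; the endpoint name, region, and payload shape are placeholders, not a definitive implementation:
class ExampleContentHandler implements SageMakerLLMContentHandler {
  contentType = "application/json";
  accepts = "application/json";

  async transformInput(prompt: string, modelKwargs: Record<string, unknown>) {
    // Serialize the prompt into the JSON body the endpoint expects (assumed shape)
    return new TextEncoder().encode(
      JSON.stringify({ inputs: prompt, parameters: modelKwargs })
    );
  }

  async transformOutput(output: Uint8Array) {
    // Parse the endpoint's JSON response back into plain text (assumed shape)
    return JSON.parse(new TextDecoder().decode(output))[0].generated_text;
  }
}

const model = new SagemakerEndpoint({
  endpointName: "my-endpoint", // placeholder
  clientOptions: { region: "us-east-1" }, // placeholder
  contentHandler: new ExampleContentHandler(),
});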
Text Embedding Models
---------------------
### Bedrock
See a [usage example](/v0.2/docs/integrations/text_embedding/bedrock).
import { BedrockEmbeddings } from "langchain/embeddings/bedrock";
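For example (region and model id are placeholders):
const embeddings = new BedrockEmbeddings({
  region: "us-east-1", // placeholder
  model: "amazon.titan-embed-text-v1", // placeholder
});
const vector = await embeddings.embedQuery("Hello world");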
Document loaders
----------------
### AWS S3 Directory and File
> [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) is an object storage service. See also: [AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html) and [AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html).
See a [usage example for the `S3Loader`](/v0.2/docs/integrations/document_loaders/web_loaders/s3).
* npm
* Yarn
* pnpm
npm install @aws-sdk/client-s3
yarn add @aws-sdk/client-s3
pnpm add @aws-sdk/client-s3
import { S3Loader } from "langchain/document_loaders/web/s3";
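As a rough sketch of the loader's configuration (the bucket, key, and Unstructured API settings are placeholders; the loader parses downloaded files via a running Unstructured API):
const loader = new S3Loader({
  bucket: "my-bucket", // placeholder
  key: "path/to/file.pdf", // placeholder
  s3Config: { region: "us-east-1" }, // placeholder
  unstructuredAPIURL: "http://localhost:8000/general/v0/general", // placeholder
  unstructuredAPIKey: "", // placeholder
});
const docs = await loader.load();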
Memory
------
### AWS DynamoDB
> [AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html) is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.
You will also need to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
* npm
* Yarn
* pnpm
npm install @aws-sdk/client-dynamodb
yarn add @aws-sdk/client-dynamodb
pnpm add @aws-sdk/client-dynamodb
See a [usage example](/v0.2/docs/integrations/chat_memory/dynamodb).
import { DynamoDBChatMessageHistory } from "@langchain/community/stores/message/dynamodb";
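A minimal sketch of wiring up message history (the table name, key attribute, and session id are placeholders):
const history = new DynamoDBChatMessageHistory({
  tableName: "langchain", // placeholder table name
  partitionKey: "id", // placeholder partition key attribute
  sessionId: "user-session-1", // placeholder session id
});
await history.addUserMessage("Hi!");
const messages = await history.getMessages();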
https://js.langchain.com/v0.2/docs/integrations/components
Components
==========
* [🗃️ Chat models](/v0.2/docs/integrations/chat/) (29 items)
* [🗃️ LLMs](/v0.2/docs/integrations/llms/) (25 items)
* [🗃️ Embedding models](/v0.2/docs/integrations/text_embedding) (24 items)
* [🗃️ Document loaders](/v0.2/docs/integrations/document_loaders) (2 items)
* [🗃️ Document transformers](/v0.2/docs/integrations/document_transformers) (3 items)
* [🗃️ Vector stores](/v0.2/docs/integrations/vectorstores) (45 items)
* [🗃️ Retrievers](/v0.2/docs/integrations/retrievers) (14 items)
* [🗃️ Tools](/v0.2/docs/integrations/tools) (19 items)
* [🗃️ Toolkits](/v0.2/docs/integrations/toolkits) (6 items)
* [🗃️ Stores](/v0.2/docs/integrations/stores/) (7 items)
https://js.langchain.com/v0.2/docs/integrations/llms/
LLMs
====
Features (natively supported)
-----------------------------
All LLMs implement the Runnable interface, which comes with default implementations of all methods, i.e. `invoke`, `batch`, `stream`, and `map`. This gives all LLMs basic support for invoking, streaming, batching, and mapping requests, which by default is implemented as below:
* _Streaming_ support defaults to returning an `AsyncIterator` of a single value, the final result returned by the underlying LLM provider. This obviously doesn't give you token-by-token streaming, which requires native support from the LLM provider, but ensures your code that expects an iterator of tokens can work for any of our LLM integrations.
* _Batch_ support defaults to calling the underlying LLM in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig`.
* _Map_ support defaults to calling `.invoke` across all instances of the array which it was called on.
Each LLM integration can optionally provide native implementations of invoke, streaming, or batching, which, for providers that support it, can be more efficient. The table below shows, for each integration, which features have been implemented with native support.
| Model | Invoke | Stream | Batch |
| --- | :---: | :---: | :---: |
| AI21 | ✅ | ❌ | ✅ |
| AlephAlpha | ✅ | ❌ | ✅ |
| AzureOpenAI | ✅ | ✅ | ✅ |
| CloudflareWorkersAI | ✅ | ✅ | ✅ |
| Cohere | ✅ | ❌ | ✅ |
| Fireworks | ✅ | ✅ | ✅ |
| GooglePaLM | ✅ | ❌ | ✅ |
| HuggingFaceInference | ✅ | ❌ | ✅ |
| LlamaCpp | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ |
| OpenAIChat | ✅ | ✅ | ✅ |
| Portkey | ✅ | ✅ | ✅ |
| Replicate | ✅ | ❌ | ✅ |
| SageMakerEndpoint | ✅ | ✅ | ✅ |
| Writer | ✅ | ❌ | ✅ |
| YandexGPT | ✅ | ❌ | ✅ |
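As a quick illustration of these shared methods, here is a minimal sketch using the OpenAI integration (any of the model classes above work the same way):
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({ temperature: 0 });

// invoke: one prompt in, one completion out
const completion = await llm.invoke("Say hello");

// stream: an AsyncIterator of tokens (or a single final chunk for
// providers without native streaming support)
for await (const chunk of await llm.stream("Say hello")) {
  console.log(chunk);
}

// batch: many prompts in parallel; maxConcurrency caps the parallelism
const results = await llm.batch(["Hello", "Bonjour"], { maxConcurrency: 2 });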
https://js.langchain.com/v0.2/docs/integrations/document_loaders
Document loaders
================
* [🗃️ File Loaders](/v0.2/docs/integrations/document_loaders/file_loaders/) (14 items)
* [🗃️ Web Loaders](/v0.2/docs/integrations/document_loaders/web_loaders/) (27 items)
https://js.langchain.com/v0.2/docs/integrations/retrievers
Retrievers
==========
* [📄️ Knowledge Bases for Amazon Bedrock](/v0.2/docs/integrations/retrievers/bedrock-knowledge-bases): Knowledge Bases for Amazon Bedrock is a fully managed end-to-end RAG workflow provided by Amazon Web Services (AWS). It provides an entire ingestion workflow for converting your documents into embeddings (vectors) and storing the embeddings in a specialized vector database. It supports popular databases for vector storage, including the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora (coming soon), and MongoDB (coming soon).
* [📄️ Chaindesk Retriever](/v0.2/docs/integrations/retrievers/chaindesk-retriever): Shows how to use the Chaindesk Retriever in a retrieval chain to retrieve documents from a Chaindesk.ai datastore.
* [📄️ ChatGPT Plugin Retriever](/v0.2/docs/integrations/retrievers/chatgpt-retriever-plugin): This module has been deprecated and is no longer supported. The linked documentation will not work in versions 0.2.0 or later.
* [📄️ Dria Retriever](/v0.2/docs/integrations/retrievers/dria): The Dria retriever allows an agent to perform a text-based search across a comprehensive knowledge hub.
* [📄️ Exa Search](/v0.2/docs/integrations/retrievers/exa): The Exa Search API provides a new search experience designed for LLMs.
* [📄️ HyDE Retriever](/v0.2/docs/integrations/retrievers/hyde): Shows how to use the HyDE Retriever, which implements Hypothetical Document Embeddings (HyDE) as described in the original HyDE paper.
* [📄️ Amazon Kendra Retriever](/v0.2/docs/integrations/retrievers/kendra-retriever): Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
* [📄️ Metal Retriever](/v0.2/docs/integrations/retrievers/metal-retriever): Shows how to use the Metal Retriever in a retrieval chain to retrieve documents from a Metal index.
* [📄️ Supabase Hybrid Search](/v0.2/docs/integrations/retrievers/supabase-hybrid): LangChain supports hybrid search with a Supabase Postgres database. Hybrid search combines the Postgres `pgvector` extension (similarity search) and Full-Text Search (keyword search) to retrieve documents. You can add documents via the `SupabaseVectorStore` `addDocuments` function. `SupabaseHybridKeyWordSearch` accepts an embedding, a Supabase client, the number of results for similarity search, and the number of results for keyword search as parameters. The `getRelevantDocuments` function produces a list of documents with duplicates removed, sorted by relevance score.
* [📄️ Tavily Search API](/v0.2/docs/integrations/retrievers/tavily): Tavily's Search API is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.
* [📄️ Time-Weighted Retriever](/v0.2/docs/integrations/retrievers/time-weighted-retriever): A Time-Weighted Retriever is a retriever that takes recency into account in addition to similarity.
* [📄️ Vector Store](/v0.2/docs/integrations/retrievers/vectorstore): Once you've created a Vector Store, the way to use it as a Retriever is very simple.
* [📄️ Vespa Retriever](/v0.2/docs/integrations/retrievers/vespa-retriever): Shows how to use Vespa.ai as a LangChain retriever.
* [📄️ Zep Retriever](/v0.2/docs/integrations/retrievers/zep-retriever): Zep is a long-term memory service for AI Assistant apps.
https://js.langchain.com/v0.2/docs/integrations/stores/
Stores
======
Storing data in key-value format is quick and efficient, and can be a powerful tool for LLM applications. The `BaseStore` class provides a simple interface for getting, setting, deleting, and iterating over lists of key-value pairs.
The public API of `BaseStore` in LangChain JS offers four main methods:
abstract mget(keys: K[]): Promise<(V | undefined)[]>;
abstract mset(keyValuePairs: [K, V][]): Promise<void>;
abstract mdelete(keys: K[]): Promise<void>;
abstract yieldKeys(prefix?: string): AsyncGenerator<K | string>;
The `m` prefix stands for "multiple", and indicates that these methods can be used to get, set, and delete multiple key-value pairs at once. The `yieldKeys` method is a generator function that can be used to iterate over all keys in the store, or all keys with a given prefix.
It's that simple!
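For example, `yieldKeys` can be consumed with `for await`; a minimal sketch using the in-memory store:
import { InMemoryStore } from "langchain/storage/in_memory";

const store = new InMemoryStore<string>();
await store.mset([
  ["prefix/key1", "a"],
  ["prefix/key2", "b"],
]);

// Iterate over every key that starts with "prefix/"
for await (const key of store.yieldKeys("prefix/")) {
  console.log(key);
}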
So far LangChain.js has two base integrations for `BaseStore`:
* [`InMemoryStore`](/v0.2/docs/integrations/stores/in_memory)
* [`LocalFileStore`](/v0.2/docs/integrations/stores/file_system) (Node.js only)
Use Cases
---------
### Chat history
If you're building web apps with chat, the `BaseStore` family of integrations can come in very handy for storing and retrieving chat history.
### Caching
The `BaseStore` family can be a useful alternative to our other caching integrations. For example, the [`LocalFileStore`](/v0.2/docs/integrations/stores/file_system) allows for persisting data through the file system. It is also incredibly fast, so your users will be able to access cached data in a snap.
See the individual sections for deeper dives on specific storage providers.
Reading Data
------------
### In Memory
Reading data is simple with KV stores. Below is an example using the [`InMemoryStore`](/v0.2/docs/integrations/stores/in_memory) and the `.mget()` method. We'll also set our generic value type to `string` so we get type safety when setting our strings.
Import the [`InMemoryStore`](/v0.2/docs/integrations/stores/in_memory) class.
import { InMemoryStore } from "langchain/storage/in_memory";
Instantiate a new instance and pass `string` as our generic for the value type.
const store = new InMemoryStore<string>();
Next we can call `.mset()` to write multiple values at once.
const data: [string, string][] = [
  ["key1", "value1"],
  ["key2", "value2"],
];
await store.mset(data);
Finally, call the `.mget()` method to retrieve the values from our store.
const data = await store.mget(["key1", "key2"]);
console.log(data);
/**
 * ["value1", "value2"]
 */
### File System
When using the file system integration we need to instantiate via the `fromPath` method. This is required because it needs to perform checks to ensure the directory exists and is readable/writable. You also must use a directory when using [`LocalFileStore`](/v0.2/docs/integrations/stores/file_system) because each entry is stored as a unique file in the directory.
import { LocalFileStore } from "langchain/storage/file_system";
const pathToStore = "./my-store-directory";
const store = await LocalFileStore.fromPath(pathToStore);
Since the file system integration only supports binary (`Uint8Array`) data, we can define an encoder for initially setting our data, and a decoder for when we retrieve data.
const encoder = new TextEncoder();
const decoder = new TextDecoder();
const data: [string, Uint8Array][] = [
  ["key1", encoder.encode(new Date().toDateString())],
  ["key2", encoder.encode(new Date().toDateString())],
];
await store.mset(data);
const data = await store.mget(["key1", "key2"]);
console.log(data.map((v) => decoder.decode(v)));
/**
 * [ 'Wed Jan 03 2024', 'Wed Jan 03 2024' ]
 */
Writing Data
------------
### In Memory
Writing data is simple with KV stores. Below is an example using the [`InMemoryStore`](/v0.2/docs/integrations/stores/in_memory) and the `.mset()` method. We'll also set our generic value type to `Date` so we get type safety when setting our dates.
Import the [`InMemoryStore`](/v0.2/docs/integrations/stores/in_memory) class.
import { InMemoryStore } from "langchain/storage/in_memory";
Instantiate a new instance and pass `Date` as our generic for the value type.
const store = new InMemoryStore<Date>();
Finally we can call `.mset()` to write multiple values at once.
const data: [string, Date][] = [
  ["date1", new Date()],
  ["date2", new Date()],
];
await store.mset(data);
### File System
When using the file system integration we need to instantiate via the `fromPath` method. This is required because it needs to perform checks to ensure the directory exists and is readable/writable. You also must use a directory when using [`LocalFileStore`](/v0.2/docs/integrations/stores/file_system) because each entry is stored as a unique file in the directory.
import { LocalFileStore } from "langchain/storage/file_system";
const pathToStore = "./my-store-directory";
const store = await LocalFileStore.fromPath(pathToStore);
When defining our data we must convert the values to `Uint8Array` because the file system integration only supports binary data.
To do this we can define an encoder for initially setting our data, and a decoder for when we retrieve data.
const encoder = new TextEncoder();
const decoder = new TextDecoder();
const data: [string, Uint8Array][] = [
  ["key1", encoder.encode(new Date().toDateString())],
  ["key2", encoder.encode(new Date().toDateString())],
];
await store.mset(data);
https://js.langchain.com/v0.2/docs/integrations/document_transformers
Document transformers
=====================
* [📄️ html-to-text](/v0.2/docs/integrations/document_transformers/html-to-text): When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than semantics.
* [📄️ @mozilla/readability](/v0.2/docs/integrations/document_transformers/mozilla_readability): When ingesting HTML documents for later retrieval, we are often interested only in the actual content of the webpage rather than semantics.
* [📄️ OpenAI functions metadata tagger](/v0.2/docs/integrations/document_transformers/openai_metadata_tagger): It can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.
https://js.langchain.com/v0.2/docs/contributing/code
Contribute Code
===============
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow. Please do not try to push directly to this repo unless you are a maintainer.
Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant maintainers.
Pull requests cannot land without passing the formatting, linting, and testing checks first. See [Testing](#testing) and [Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
* Fix a bug
  * Add a relevant unit or integration test when possible. These live in `**/tests/*.test.ts` and `**/tests/*.int.test.ts`.
* Make an improvement
  * Update any affected example notebooks and documentation. These live in `docs`.
  * Update unit and integration tests when relevant.
* Add a feature
  * Add a demo notebook/MDX file in `docs/core_docs/docs`.
  * Add unit and integration tests.
We are a small, progress-oriented team. If there's something you'd like to add or change, opening a pull request is the best way to get our attention.
🚀 Quick Start
--------------
This quick start guide explains how to run the repository locally. For a [development container](https://containers.dev/), see the [.devcontainer folder](https://github.com/langchain-ai/langchainjs/tree/main/.devcontainer).
### 🏭 Release process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a developer and published to [npm](https://www.npmjs.com/package/langchain).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software, even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)! If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
#### Integration releases
The release script can be executed only while on a fresh `main` branch, with no un-committed changes, from the package root. If working from a fork of the repository, make sure to sync the forked `main` branch with the upstream `main` branch first.
You can invoke the script by calling `yarn release`. If new dependencies have been added to the integration package, install them first (i.e. run `yarn`, then `yarn release`).
There are three parameters which can be passed to this script, one required and two optional.
* **Required**: `<workspace name>` (e.g. `@langchain/core`). The name of the package to release; it can be found in the `name` field of the package's `package.json`.
* **Optional**: `--bump-deps`. Finds all packages in the repo which depend on this workspace, checks out a new branch, updates the dependency version, runs `yarn install`, and commits and pushes to the new branch. Generally, this is not necessary.
* **Optional**: `--tag <tag>` (e.g. `--tag beta`). Adds a tag to the NPM release. Useful if you want to push a release candidate.
This script automatically bumps the package version, creates a new release branch with the changes, pushes the branch to GitHub, uses `release-it` to automatically release to NPM, and more depending on the flags passed.
Halfway through this script, you'll be prompted to enter an NPM OTP (typically from an authenticator app). This value is not stored anywhere and is only used to authenticate the NPM release.
> **Note** Unless you are releasing `langchain` itself, answer `no` to all prompts following `Publish @langchain/<package> to npm?`. Then, commit the change manually with the following commit message: `<package>[patch]: Release <new version>`. E.g.: `groq[patch]: Release 0.0.1`.
Docker must be running if you are releasing one of `langchain`, `@langchain/core`, or `@langchain/community`. These packages run LangChain's export tests, which run inside Docker containers.
Full example: `yarn release @langchain/core`.
### 🛠️ Tooling
This project uses the following tools, which are worth getting familiar with if you plan to contribute:
* **[yarn](https://yarnpkg.com/) (v3.4.1)** - dependency management
* **[eslint](https://eslint.org/)** - enforcing standard lint rules
* **[prettier](https://prettier.io/)** - enforcing standard code formatting
* **[jest](https://jestjs.io/)** - testing code
* **[TypeDoc](https://typedoc.org/)** - reference doc generation from comments
* **[Docusaurus](https://docusaurus.io/)** - static site generation for documentation
🚀 Quick Start
--------------
Clone this repo, then cd into it:
cd langchainjs
Next, try running the following common tasks:
✅ Common Tasks
---------------
Our goal is to make it as easy as possible for you to contribute to this project. All of the below commands should be run from within a workspace directory (e.g. `langchain`, `libs/langchain-community`) unless otherwise noted.
cd langchain
Or, if you are working on a community integration:
cd libs/langchain-community
### Setup
**Prerequisite**: Node version 18+ is required. Check your Node version with `node -v` and update it if required.
To get started, you will need to install the dependencies for the project. To do so, run:
yarn
Then, you will need to switch directories into `langchain-core` and build core by running:
cd ../langchain-core
yarn
yarn build
### Linting
We use [eslint](https://eslint.org/) to enforce standard lint rules. To run the linter, run:
yarn lint
### Formatting
We use [prettier](https://prettier.io) to enforce code formatting style. To run the formatter, run:
yarn format
To just check for formatting differences, without fixing them, run:
yarn format:check
### Testing
In general, tests should be added within a `tests/` folder alongside the modules they are testing.
**Unit tests** cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test. Unit tests should be called `*.test.ts`.
To run only unit tests, run:
yarn test
#### Running a single test
To run a single test, run the following from within a workspace:
yarn test:single /path/to/yourtest.test.ts
This is useful for developing individual features.
**Integration tests** cover logic that requires making calls to outside APIs (often integration with other services).
If you add support for a new external API, please add a new integration test. Integration tests should be called `*.int.test.ts`.
Note that most integration tests require credentials or other setup. You will likely need to set up a `langchain/.env` or `libs/langchain-community/.env` file like the example [here](https://github.com/langchain-ai/langchainjs/blob/main/langchain/.env.example).
We generally recommend only running integration tests with `yarn test:single`, but if you want to run all integration tests, run:
yarn test:integration
### Building
To build the project, run:
yarn build
### Adding an Entrypoint
LangChain exposes multiple subpaths the user can import from, e.g.
import { OpenAI } from "langchain/llms/openai";
We call these subpaths "entrypoints". In general, you should create a new entrypoint if you are adding a new integration with a 3rd party library. If you're adding self-contained functionality without any external dependencies, you can add it to an existing entrypoint.
In order to declare a new entrypoint that users can import from, you should edit the `langchain/langchain.config.js` or `libs/langchain-community/langchain.config.js` file. To add an entrypoint `tools` that imports from `tools/index.ts` you'd add the following to the `entrypoints` key inside the `config` variable:
// ...
entrypoints: {
  // ...
  tools: "tools/index",
},
// ...
If you're adding a new integration which requires installing a third party dependency, you must add the entrypoint to the `requiresOptionalDependency` array, also located inside `langchain/langchain.config.js` or `libs/langchain-community/langchain.config.js`.
// ...
requiresOptionalDependency: [
  // ...
  "tools/index",
],
// ...
This will make sure the entrypoint is included in the published package, and in generated documentation.
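After rebuilding, users can then import from the new subpath like any other entrypoint (continuing the hypothetical `tools` example above; `MyNewTool` is an illustrative export name, not a real one):
import { MyNewTool } from "langchain/tools";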
Documentation
-------------
### Contribute Documentation
#### Install dependencies
##### Note: you only need to follow these steps if you are building the docs site locally.
1. [Quarto](https://quarto.org/) - package that converts Jupyter notebooks (`.ipynb` files) into `.mdx` files for serving in Docusaurus.
2. `yarn build --filter=core_docs` - It's as simple as that! (or you can simply run `yarn build` from `docs/core_docs/`)
All notebooks are converted to `.md` files and automatically gitignored. If you would like to create a non notebook doc, it must be a `.mdx` file.
### Writing Notebooks
When adding new dependencies inside the notebook you must update the import map inside `deno.json` in the root of the LangChain repo.
This is required because the notebooks use the Deno runtime, and Deno formats imports differently than Node.js.
Example:
```typescript
// Import in Node:
import { z } from "zod";

// Import in Deno:
import { z } from "npm:/zod";
```
See examples inside `deno.json` for more details.
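For illustration only (the real file contains many more entries, so defer to it for the exact format), a mapping in the import map might look roughly like:

```json
{
  "imports": {
    "zod": "npm:/zod",
    "@langchain/core/": "npm:/@langchain/core/"
  }
}
```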
Docs are largely autogenerated by [TypeDoc](https://typedoc.org/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
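As a rough sketch of what TypeDoc-friendly documentation looks like (the function itself is hypothetical):

```typescript
/**
 * Splits the input text into chunks of at most `chunkSize` characters.
 *
 * @param text - The raw text to split.
 * @param chunkSize - Maximum length of each chunk.
 * @returns The chunks in their original order.
 */
export function splitIntoChunks(text: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```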
Documentation and the skeleton live under the `docs/` folder. Example code is imported from under the `examples/` folder.
### Running examples
If you add a new major piece of functionality, it is helpful to add an example to showcase how to use it. Most of our users find examples to be the most helpful kind of documentation.
Examples can be added in the `examples/src` directory, e.g. `examples/src/path/to/example`. This example can then be invoked with `yarn example path/to/example` at the top level of the repo.
To run examples that require an environment variable, you'll need to add a `.env` file under `examples/.env`.
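As a sketch, a new example file (the path and contents here are hypothetical) might look like the following, and would then be run with `yarn example path/to/example`:

```typescript
// examples/src/path/to/example.ts (hypothetical path)
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Keep examples short, focused, and runnable end-to-end.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 50,
  chunkOverlap: 0,
});
const docs = await splitter.createDocuments([
  "Examples live under examples/src and are invoked with yarn example.",
]);
console.log(docs);
```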
### Build Documentation Locally
To generate and view the documentation locally, change to the project root and run `yarn` to ensure dependencies get installed in both the `docs/` and `examples/` workspaces:
```bash
cd ..
yarn
```
Then run:
yarn docs
Advanced
--------
**Environment tests** test whether LangChain works across different JS environments, including Node.js (both ESM and CJS), Edge environments (e.g. Cloudflare Workers), and browsers (using Webpack).
To run the environment tests with Docker, run the following command from the project root:
yarn test:exports:docker
https://js.langchain.com/v0.2/docs/contributing/repo_structure
Repository Structure
====================
If you plan on contributing to LangChain code or documentation, it can be useful to understand the high-level structure of the repository.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
Here's the structure visualized as a tree:
```
.
├── docs
│   ├── core_docs          # Content for the documentation at https://js.langchain.com/
│   ├── api_refs           # Content for the API refs at https://v02.api.js.langchain.com/
├── langchain              # Main package
│   ├── src/**/tests/*.test.ts      # Unit tests (present in each package, not shown for brevity)
│   ├── src/**/tests/*.int.test.ts  # Integration tests (present in each package, not shown for brevity)
├── langchain-core         # Base interfaces for key abstractions
├── libs                   # Community packages
│   ├── langchain-community  # Third-party integrations
│   ├── langchain-partner-1
│   ├── langchain-partner-2
│   ├── ...
```
The root directory also contains the following files:
* `package.json`: Dependencies for building docs and linting docs.
There are other files at the root directory level, but their presence should be self-explanatory. Feel free to browse around!
Documentation
-------------
The `/docs` directory contains the content for the documentation shown at [https://js.langchain.com/](https://js.langchain.com/) and the associated API reference at [https://v02.api.js.langchain.com/](https://v02.api.js.langchain.com/).
See the [documentation](/v0.2/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
Code
----
The `/libs` directory contains the code for the LangChain packages.
To learn more about how to contribute code see the following guidelines:
* [Code](/v0.2/docs/contributing/code): learn how to develop in the LangChain codebase.
* [Integrations](/v0.2/docs/contributing/integrations): learn how to contribute third-party integrations to langchain-community or to start a new partner package.
* [Testing](/v0.2/docs/contributing/testing): learn how to write tests for the packages.
https://js.langchain.com/v0.2/docs/contributing/testing
Testing
=======
In general, tests should be added within a `tests/` folder alongside the modules they are testing.
**Unit tests** cover modular logic that does not require calls to outside APIs.
If you add new logic, please add a unit test. Unit tests should be called `*.test.ts`.
To run only unit tests, run:
yarn test
### Running a single test
To run a single test, run the following from within a workspace:
yarn test:single /path/to/yourtest.test.ts
This is useful for developing individual features.
**Integration tests** cover logic that requires making calls to outside APIs (often integration with other services).
If you add support for a new external API, please add a new integration test. Integration tests should be called `*.int.test.ts`.
Note that most integration tests require credentials or other setup. You will likely need to set up a `langchain/.env` or `libs/langchain-community/.env` file like the example [here](https://github.com/langchain-ai/langchainjs/blob/main/langchain/.env.example).
We generally recommend only running integration tests with `yarn test:single`, but if you want to run all integration tests, run:
yarn test:integration
https://js.langchain.com/v0.2/docs/contributing/faq
Frequently Asked Questions
==========================
Pull Requests (PRs)
-------------------
### How do I allow maintainers to edit my PR?
When you submit a pull request, there may be additional changes necessary before merging it. Oftentimes, it is more efficient for the maintainers to make these changes themselves before merging, rather than asking you to do so in code review.
By default, most pull requests will have a `✅ Maintainers are allowed to edit this pull request.` badge in the right-hand sidebar.
If you do not see this badge, you may have this setting off for the fork you are pull-requesting from. See [this GitHub docs page](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) for more information.
Notably, GitHub doesn't allow this setting to be enabled for forks in **organizations** ([issue](https://github.com/orgs/community/discussions/5634)). If you are working in an organization, we recommend submitting your PR from a personal fork in order to enable this setting.
https://js.langchain.com/v0.2/docs/contributing/documentation/style_guide
LangChain Documentation Style Guide
===================================
Introduction
------------
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too. This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around organization and structure.
Philosophy
----------
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr). Under this framework, all documentation falls under one of four categories:
* **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
* An example of this is our [LCEL streaming guide](/v0.2/docs/how_to/streaming).
* Our guide on [custom components](/v0.2/docs/how_to/custom_chat) is another one.
* **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
* The clearest examples of this are our [Use case](/v0.2/docs/how_to/#use-cases) pages.
* **Reference**: Technical descriptions of the machinery and how to operate it.
* Our [Runnable](/v0.2/docs/how_to/#langchain-expression-language-lcel) pages are an example of this.
* The [API reference pages](https://v02.api.js.langchain.com/) are another.
* **Explanation**: Explanations that clarify and illuminate a particular topic.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
Taxonomy
--------
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms when contributing new documentation:
### Getting started
The [getting started section](/v0.2/docs/introduction) includes a high-level introduction to LangChain, a quickstart that tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/v0.2/docs/how_to/#use-cases) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.). The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped, then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more in-depth concepts and strategies that accompany the main happy paths.
**Note:** The below sections are listed roughly in order of increasing level of abstraction.
### Expression Language
[LangChain Expression Language (LCEL)](/v0.2/docs/how_to/#langchain-expression-language-lcel) is the fundamental way that most LangChain components fit together, and this section is designed to teach developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors, and some **References** for how to use different methods in the Runnable interface.
### Components
The [how to section](/v0.2/docs/how_to) covers concepts one level of abstraction higher than LCEL. Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes, such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components they cover.
**Note:** As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should cover only components that exist in `@langchain/core`.
### Integrations
The [integrations](/v0.2/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services. If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the discretion of the third-party provider.
**Note:** Concepts covered in `Integrations` should generally exist in `@langchain/community` or specific partner packages.
### Tutorials and Ecosystem
The [Tutorials](/v0.2/docs/tutorials) and [Ecosystem](/v0.2/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above. This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references should act as **References** (as the name implies), with some **Explanation**-focused content as well.
Sample developer journey
------------------------
We have set up our docs to assist a new developer to LangChain. Let's walk through the intended path:
* The developer lands on [https://js.langchain.com](https://js.langchain.com), and reads through the introduction and the diagram.
* If they are just curious, they may be drawn to the [Quickstart](/v0.2/docs/tutorials/llm_chain) to get a high-level tour of what LangChain contains.
* If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
* They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
* Next, they can learn about LangChain's various components and integrations.
* Finally, they can get additional knowledge through the Guides.
This is only an ideal, of course; sections will inevitably reference lower- or higher-level concepts that are documented in other sections.
Guidelines
----------
Here are some other guidelines you should think about when writing and organizing documentation.
### Linking to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Conciseness
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than re-explain it, unless the concept you are documenting presents some new wrinkle.
Be concise, including in code samples.
### General style
* Use active voice and present tense whenever possible.
* Use examples and code snippets to illustrate concepts and usage.
* Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
* Use bullet points and numbered lists to break down information into easily digestible chunks.
* Use tables (especially for **Reference** sections) and diagrams often to present information visually.
* Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.
https://js.langchain.com/v0.1/docs/modules/chains/popular/summarize
Summarization
=============
A summarization chain can be used to summarize multiple documents. One approach is to split the input into smaller documents (chunks) and operate over them with a `MapReduceDocumentsChain`. You can also instead use a `StuffDocumentsChain` or a `RefineDocumentsChain` for the summarization step.
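The chain type is selected when loading the chain. A minimal sketch, mirroring the imports used in the examples below:

```typescript
import { OpenAI } from "@langchain/openai";
import { loadSummarizationChain } from "langchain/chains";

const model = new OpenAI({ temperature: 0 });

// "map_reduce" is shown in the examples below; "stuff" and "refine"
// swap in the other two document chains mentioned above.
const chain = loadSummarizationChain(model, { type: "refine" });
```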
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/anthropic @langchain/openai
# or
yarn add @langchain/anthropic @langchain/openai
# or
pnpm add @langchain/anthropic @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// In this example, we use a `MapReduceDocumentsChain` specifically prompted to summarize a set of documents.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new OpenAI({ temperature: 0 });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// This convenience function creates a document chain prompted to summarize a set of documents.
const chain = loadSummarizationChain(model, { type: "map_reduce" });
const res = await chain.invoke({
  input_documents: docs,
});
console.log({ res });
/*
{
  res: {
    text: ' President Biden is taking action to protect Americans from the COVID-19 pandemic and Russian aggression, providing economic relief, investing in infrastructure, creating jobs, and fighting inflation. He is also proposing measures to reduce the cost of prescription drugs, protect voting rights, and reform the immigration system. The speaker is advocating for increased economic security, police reform, and the Equality Act, as well as providing support for veterans and military families. The US is making progress in the fight against COVID-19, and the speaker is encouraging Americans to come together and work towards a brighter future.'
  }
}
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [loadSummarizationChain](https://api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
Intermediate Steps
------------------
We can also return the intermediate steps for `map_reduce` chains, should we want to inspect them. This is done with the `returnIntermediateSteps` parameter.
```typescript
import { OpenAI } from "@langchain/openai";
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// In this example, we use a `MapReduceDocumentsChain` specifically prompted to summarize a set of documents.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new OpenAI({ temperature: 0 });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// This convenience function creates a document chain prompted to summarize a set of documents.
const chain = loadSummarizationChain(model, {
  type: "map_reduce",
  returnIntermediateSteps: true,
});
const res = await chain.invoke({
  input_documents: docs,
});
console.log({ res });
/*
{
  res: {
    intermediateSteps: [
      "In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
      "The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.",
      " President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs.",
    ],
    text: "President Biden is taking action to protect Americans from the COVID-19 pandemic and Russian aggression, providing economic relief, investing in infrastructure, creating jobs, and fighting inflation. He is also proposing measures to reduce the cost of prescription drugs, protect voting rights, and reform the immigration system. The speaker is advocating for increased economic security, police reform, and the Equality Act, as well as providing support for veterans and military families. The US is making progress in the fight against COVID-19, and the speaker is encouraging Americans to come together and work towards a brighter future.",
  },
}
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [loadSummarizationChain](https://api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
Streaming
---------
By passing a custom LLM to the internal `map_reduce` chain, we can stream the final output:
```typescript
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// In this example, we use a separate LLM as the final summary LLM to meet our
// customized LLM requirements for different stages of the chain and to only
// stream the final results.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new ChatAnthropic({ temperature: 0 });
const combineModel = new ChatOpenAI({
  model: "gpt-4",
  temperature: 0,
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string): Promise<void> | void {
        console.log("token", token);
        /*
          token President
          token  Biden
          ...
          token  protections
          token .
        */
      },
    },
  ],
});
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 5000 });
const docs = await textSplitter.createDocuments([text]);

// This convenience function creates a document chain prompted to summarize a set of documents.
const chain = loadSummarizationChain(model, {
  type: "map_reduce",
  combineLLM: combineModel,
});
const res = await chain.invoke({
  input_documents: docs,
});
console.log({ res });
/*
{
  res: {
    text: "President Biden delivered his first State of the Union address, focusing on the Russian invasion of Ukraine, domestic economic challenges, and his administration's efforts to revitalize American manufacturing and infrastructure. He announced new sanctions against Russia and the deployment of U.S. forces to NATO countries. Biden also outlined his plan to fight inflation, lower costs for American families, and reduce the deficit. He emphasized the need to pass the Bipartisan Innovation Act, confirmed his Federal Reserve nominees, and called for the end of COVID shutdowns. Biden also addressed issues such as gun violence, voting rights, immigration reform, women's rights, and privacy protections."
  }
}
*/
```
#### API Reference:
* [loadSummarizationChain](https://api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
https://js.langchain.com/v0.2/docs/contributing/integrations
Contribute Integrations
=======================
TODO rewrite for JS
-------------------
To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](/v0.2/docs/contributing/code/).
There are a few different places you can contribute integrations for LangChain:
* **Community**: For lighter-weight integrations that are primarily maintained by LangChain and the Open Source Community.
* **Partner Packages**: For independent packages that are co-maintained by LangChain and a partner.
For the most part, new integrations should be added to the Community package. Partner packages require more maintenance as separate packages, so please confirm with the LangChain team before creating a new partner package.
In the following sections, we'll walk through how to contribute to each of these packages from a fake company, `Parrot Link AI`.
Community package[](#community-package "Direct link to Community package")
---------------------------------------------------------------------------
The `@langchain/community` package is in `libs/community` and contains most integrations.
It can be installed with `pip install langchain-community`, and exported members can be imported with code like
```python
from langchain_community.chat_models import ChatParrotLink
from langchain_community.llms import ParrotLinkLLM
from langchain_community.vectorstores import ParrotLinkVectorStore
```
The `community` package relies on manually-installed dependent packages, so you will see errors if you try to import a package that is not installed. In our fake example, if you try to use `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it.
Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
```python
from langchain_core.language_models.chat_models import BaseChatModel


class ChatParrotLink(BaseChatModel):
    """ChatParrotLink chat model.

    Example:
        .. code-block:: python

            from langchain_community.chat_models import ChatParrotLink

            model = ChatParrotLink()
    """

    ...
```
And we would write tests in:
* Unit tests: `libs/community/tests/unit_tests/chat_models/test_parrot_link.py`
* Integration tests: `libs/community/tests/integration_tests/chat_models/test_parrot_link.py`
And add documentation to:
* `docs/docs/integrations/chat/parrot_link.ipynb`
Partner package in LangChain repo
---------------------------------
Partner packages can be hosted in the `LangChain` monorepo or in an external repo.
A partner package in the `LangChain` repo is placed in `libs/partners/{partner}`, and the package source code is in `libs/partners/{partner}/langchain_{partner}`.
A package is installed by users with `pip install langchain-{partner}`, and the package members can be imported with code like:
from langchain_{partner} import X
### Set up a new package
To set up a new partner package, use the latest version of the LangChain CLI. You can install or update it with:
pip install -U langchain-cli
Let's say you want to create a new partner package working for a company called Parrot Link AI.
Then, run the following command to create a new partner package:
```bash
cd libs/partners
langchain-cli integration new
> Name: parrot-link
> Name of integration in PascalCase [ParrotLink]: ParrotLink
```
This will create a new package in `libs/partners/parrot-link` with the following structure:
```
libs/partners/parrot-link/
  langchain_parrot_link/  # folder containing your package
    ...
  tests/
    ...
  docs/      # bootstrapped docs notebooks, must be moved to /docs in monorepo root
    ...
  scripts/   # scripts for CI
    ...
  LICENSE
  README.md       # fill out with information about your package
  Makefile        # default commands for CI
  pyproject.toml  # package metadata, mostly managed by Poetry
  poetry.lock     # package lockfile, managed by Poetry
  .gitignore
```
### Implement your package
First, add any dependencies your package needs, such as your company's SDK:
poetry add parrot-link-sdk
If you need separate dependencies for type checking, you can add them to the `typing` group with:
poetry add --group typing types-parrot-link-sdk
Then, implement your package in `libs/partners/parrot-link/langchain_parrot_link`.
By default, this will include stubs for a Chat Model, an LLM, and/or a Vector Store. You should delete any of the files you won't use and remove them from `__init__.py`.
### Write Unit and Integration Tests
Some basic tests are presented in the `tests/` directory. You should add more tests to cover your package's functionality.
For information on running and implementing tests, see the [Testing guide](/v0.2/docs/contributing/testing/).
### Write documentation
Documentation is generated from Jupyter notebooks in the `docs/` directory. You should place the notebooks with examples in the relevant `docs/docs/integrations` directory in the monorepo root.
### (If Necessary) Deprecate community integration
Note: this is only necessary if you're migrating an existing community integration into a partner package. If the component you're integrating is net-new to LangChain (i.e. not already in the `community` package), you can skip this step.
Let's pretend we migrated our `ChatParrotLink` chat model from the community package to the partner package. We would need to deprecate the old model in the community package.
We would do that by adding a `@deprecated` decorator to the old model as follows, in `libs/community/langchain_community/chat_models/parrot_link.py`.
Before our change, our chat model might look like this:
```python
class ChatParrotLink(BaseChatModel):
    ...
```
After our change, it would look like this:
```python
from langchain_core._api.deprecation import deprecated


@deprecated(
    since="0.0.<next community version>",
    removal="0.2.0",
    alternative_import="langchain_parrot_link.ChatParrotLink",
)
class ChatParrotLink(BaseChatModel):
    ...
```
You should do this for _each_ component that you're migrating to the partner package.
### Additional steps
Contributor steps:
* Add secret names to manual integrations workflow in `.github/workflows/_integration_test.yml`
* Add secrets to release workflow (for pre-release testing) in `.github/workflows/_release.yml`
Maintainer steps (Contributors should **not** do these):
* Set up PyPI and Test PyPI projects
* Add credential secrets to GitHub Actions
* Add the package to conda-forge
Partner package in external repo
--------------------------------
Partner packages in external repos must be coordinated between the LangChain team and the partner organization to ensure that they are maintained and updated.
If you're interested in creating a partner package in an external repo, please start with one in the LangChain repo, and then reach out to the LangChain team to discuss how to move it to an external repo.
https://js.langchain.com/v0.1/docs/modules/chains/document/refine
Refine
======
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context. The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain. There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
![refine_diagram](/v0.1/assets/images/refine-a70f30dd7ada6fe5e3fcc40dd70de037.jpg)
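Conceptually, the refine strategy boils down to a loop like the following sketch. This is a simplified re-implementation for illustration, not the chain's actual internals, and the prompt templates are hypothetical stand-ins:

```typescript
import { OpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import type { Document } from "@langchain/core/documents";

// Simplified illustration of the refine strategy; the real chain adds more plumbing.
async function refineAnswer(docs: Document[], question: string): Promise<string> {
  const llm = new OpenAI({ temperature: 0 });
  const questionPrompt = PromptTemplate.fromTemplate(
    "Context:\n{context}\n\nAnswer the question: {question}"
  );
  const refinePrompt = PromptTemplate.fromTemplate(
    "Question: {question}\nExisting answer: {existing_answer}\nNew context:\n{context}\nRefine the existing answer if the new context is helpful."
  );

  // The first document seeds the answer...
  let answer = await llm.invoke(
    await questionPrompt.format({ context: docs[0].pageContent, question })
  );
  // ...and each subsequent document gets a chance to refine it.
  for (const doc of docs.slice(1)) {
    answer = await llm.invoke(
      await refinePrompt.format({
        question,
        existing_answer: answer,
        context: doc.pageContent,
      })
    );
  }
  return answer;
}
```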
Here's how it looks in practice:
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { loadQARefineChain } from "langchain/chains";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Create the models and chain
const embeddings = new OpenAIEmbeddings();
const model = new OpenAI({ temperature: 0 });
const chain = loadQARefineChain(model);

// Load the documents and create the vector store
const loader = new TextLoader("./state_of_the_union.txt");
const docs = await loader.loadAndSplit();
const store = await MemoryVectorStore.fromDocuments(docs, embeddings);

// Select the relevant documents
const question = "What did the president say about Justice Breyer";
const relevantDocs = await store.similaritySearch(question);

// Call the chain
const res = await chain.invoke({
  input_documents: relevantDocs,
  question,
});
console.log(res);
/*
{
  output_text: "The president said that Justice Stephen Breyer has dedicated his life to serve this country and thanked him for his service. He also mentioned that Judge Ketanji Brown Jackson will continue Justice Breyer's legacy of excellence, and that the constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before. He emphasized the importance of protecting access to health care, preserving a woman's right to choose, and advancing maternal health care in America. He also expressed his support for the LGBTQ+ community, and his commitment to protecting their rights, including offering a Unity Agenda for the Nation to beat the opioid epidemic, increase funding for prevention, treatment, harm reduction, and recovery, and strengthen the Violence Against Women Act."
}
*/
```
#### API Reference:
* [loadQARefineChain](https://api.js.langchain.com/functions/langchain_chains.loadQARefineChain.html) from `langchain/chains`
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
Prompt customization
--------------------
You may want to tweak the behavior of a step by changing the prompt. Here's an example of how to do that:
```typescript
import { loadQARefineChain } from "langchain/chains";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { PromptTemplate } from "@langchain/core/prompts";

export const questionPromptTemplateString = `Context information is below.
---------------------
{context}
---------------------
Given the context information and no prior knowledge, answer the question: {question}`;

const questionPrompt = new PromptTemplate({
  inputVariables: ["context", "question"],
  template: questionPromptTemplateString,
});

const refinePromptTemplateString = `The original question is as follows: {question}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer
(only if needed) with some more context below.
------------
{context}
------------
Given the new context, refine the original answer to better answer the question.
You must provide a response, either original answer or refined answer.`;

const refinePrompt = new PromptTemplate({
  inputVariables: ["question", "existing_answer", "context"],
  template: refinePromptTemplateString,
});

// Create the models and chain
const embeddings = new OpenAIEmbeddings();
const model = new OpenAI({ temperature: 0 });
const chain = loadQARefineChain(model, {
  questionPrompt,
  refinePrompt,
});

// Load the documents and create the vector store
const loader = new TextLoader("./state_of_the_union.txt");
const docs = await loader.loadAndSplit();
const store = await MemoryVectorStore.fromDocuments(docs, embeddings);

// Select the relevant documents
const question = "What did the president say about Justice Breyer";
const relevantDocs = await store.similaritySearch(question);

// Call the chain
const res = await chain.invoke({
  input_documents: relevantDocs,
  question,
});
console.log(res);
/*
{
  output_text: "The president said that Justice Stephen Breyer has dedicated his life to serve this country and thanked him for his service. He also mentioned that Judge Ketanji Brown Jackson will continue Justice Breyer's legacy of excellence, and that the constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before. He emphasized the importance of protecting access to health care, preserving a woman's right to choose, and advancing maternal health care in America. He also expressed his support for the LGBTQ+ community, and his commitment to protecting their rights, including offering a Unity Agenda for the Nation to beat the opioid epidemic, increase funding for prevention, treatment, harm reduction, and recovery, and strengthen the Violence Against Women Act."
}
*/
```
#### API Reference:
* [loadQARefineChain](https://api.js.langchain.com/functions/langchain_chains.loadQARefineChain.html) from `langchain/chains`
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* * *

https://js.langchain.com/v0.2/docs/integrations/text_embedding/openai
OpenAI
======
The `OpenAIEmbeddings` class uses the OpenAI API to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing `stripNewLines: false` to the constructor.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  batchSize: 512, // Default value if omitted is 512. Max is 2048
  model: "text-embedding-3-large",
});
```
If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when initializing the model.
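For example, here's a minimal sketch combining the options above (the organization id and model name are placeholder values):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  stripNewLines: false, // keep newline characters rather than stripping them
  organization: "org-YOUR-ORG-ID", // placeholder; or set process.env.OPENAI_ORGANIZATION
  model: "text-embedding-3-small",
});
```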
Specifying dimensions
---------------------
With the `text-embedding-3` class of models, you can specify the size of the embeddings you want returned. For example, by default `text-embedding-3-large` returns embeddings of dimension 3072:
```typescript
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
});

const vectors = await embeddings.embedDocuments(["some text"]);
console.log(vectors[0].length);
```
3072
But by passing in `dimensions: 1024` we can reduce the size of our embeddings to 1024:
```typescript
const embeddings1024 = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  dimensions: 1024,
});

const vectors2 = await embeddings1024.embedDocuments(["some text"]);
console.log(vectors2[0].length);
```
1024
Custom URLs
-----------
You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:
```typescript
const model = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});
```
You can also pass other `ClientOptions` parameters accepted by the official SDK.
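For instance, here's a sketch passing the SDK's `timeout` and `defaultHeaders` client options alongside a custom base URL (the values shown are placeholders; check the SDK documentation for the full list of `ClientOptions` fields):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const customEmbeddings = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://your_custom_url.com", // placeholder URL
    timeout: 10000, // milliseconds to wait before aborting a request
    defaultHeaders: { "X-Custom-Header": "some-value" }, // hypothetical header
  },
});
```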
If you are hosting on Azure OpenAI, see the [dedicated page instead](/v0.2/docs/integrations/text_embedding/azure_openai).
* * *

https://js.langchain.com/docs/integrations/retrievers/tavily
Tavily Search API
=================
[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.
Usage
-----
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
You will need to populate a `TAVILY_API_KEY` environment variable with your Tavily API key or pass it into the constructor as `apiKey`.
For a full list of allowed arguments, see [the official documentation](https://app.tavily.com/documentation/api). You can also pass any param to the SDK via a `kwargs` object.
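For instance, here's a sketch passing the API key explicitly and forwarding an extra search parameter through `kwargs` (the `include_domains` argument is shown only as an assumed example; consult the Tavily documentation for the actual parameter list):

```typescript
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";

const customRetriever = new TavilySearchAPIRetriever({
  apiKey: "YOUR-TAVILY-API-KEY", // placeholder; alternatively set process.env.TAVILY_API_KEY
  kwargs: { include_domains: ["example.com"] }, // assumed Tavily search parameter
});
```

A basic end-to-end example: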
```typescript
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";

const retriever = new TavilySearchAPIRetriever({
  k: 3,
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer in the 2022 State of the Union?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: `Shy Justice Br eyer. During his remarks, the president paid tribute to retiring Supreme Court Justice Stephen Breyer. "Tonight, I'd like to honor someone who dedicated his life to...`,
        metadata: [Object]
      },
      Document {
        pageContent: 'Fact Check. Ukraine. 56 Posts. Sort by. 10:16 p.m. ET, March 1, 2022. Biden recognized outgoing Supreme Court Justice Breyer during his speech. President Biden recognized outgoing...',
        metadata: [Object]
      },
      Document {
        pageContent: `In his State of the Union address on March 1, Biden thanked Breyer for his service. "I'd like to honor someone who has dedicated his life to serve this country: Justice Breyer — an Army...`,
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [TavilySearchAPIRetriever](https://api.js.langchain.com/classes/langchain_community_retrievers_tavily_search_api.TavilySearchAPIRetriever.html) from `@langchain/community/retrievers/tavily_search_api`
* * *

https://js.langchain.com/docs/modules/data_connection/document_transformers/recursive_text_splitter
Recursively split by character
==============================
This text splitter is the recommended one for generic text. It is parameterized by a list of characters and tries to split on them in order until the chunks are small enough. The default list of separators is `["\n\n", "\n", " ", ""]`. This has the effect of keeping all paragraphs (and then sentences, and then words) together as long as possible, since those are generally the most semantically related pieces of text.
1. How the text is split: by list of characters
2. How the chunk size is measured: by number of characters
Important parameters to know here are `chunkSize` and `chunkOverlap`. `chunkSize` controls the max size (in terms of number of characters) of the final documents. `chunkOverlap` specifies how much overlap there should be between chunks. This is often helpful to make sure that the text isn't split weirdly. In the example below we set these values to be small (for illustration purposes), but in practice they default to `1000` and `200` respectively.
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10,
  chunkOverlap: 1,
});

const output = await splitter.createDocuments([text]);
```
You'll note that in the above example we are splitting a raw text string and getting back a list of documents. We can also split documents directly.
```typescript
import { Document } from "langchain/document";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const text = `Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.This is a weird text to write, but gotta test the splittingggg some how.\n\nBye!\n\n-H.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10,
  chunkOverlap: 1,
});

const docOutput = await splitter.splitDocuments([
  new Document({ pageContent: text }),
]);
```
You can customize the `RecursiveCharacterTextSplitter` with arbitrary separators by passing a `separators` parameter like this:
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { Document } from "@langchain/core/documents";

const text = `Some other considerations include:

- Do you deploy your backend and frontend together, or separately?
- Do you deploy your backend co-located with your database, or separately?

**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.

## Deployment Options

See below for a list of deployment options for your LangChain app. If you don't see your preferred option, please get in touch and we can add it to this list.`;

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 50,
  chunkOverlap: 1,
  separators: ["|", "##", ">", "-"],
});

const docOutput = await splitter.splitDocuments([
  new Document({ pageContent: text }),
]);

console.log(docOutput);

/*
  [
    Document { pageContent: 'Some other considerations include:', metadata: { loc: [Object] } },
    Document { pageContent: '- Do you deploy your backend and frontend together', metadata: { loc: [Object] } },
    Document { pageContent: 'r, or separately?', metadata: { loc: [Object] } },
    Document { pageContent: '- Do you deploy your backend co', metadata: { loc: [Object] } },
    Document { pageContent: '-located with your database, or separately?\n\n**Pro', metadata: { loc: [Object] } },
    Document { pageContent: 'oduction Support:** As you move your LangChains in', metadata: { loc: [Object] } },
    Document { pageContent: "nto production, we'd love to offer more hands", metadata: { loc: [Object] } },
    Document { pageContent: '-on support.\nFill out [this form](https://airtable', metadata: { loc: [Object] } },
    Document { pageContent: 'e.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to shar', metadata: { loc: [Object] } },
    Document { pageContent: "re more about what you're building, and our team w", metadata: { loc: [Object] } },
    Document { pageContent: 'will get in touch.', metadata: { loc: [Object] } },
    Document { pageContent: '#', metadata: { loc: [Object] } },
    Document { pageContent: "# Deployment Options\n\nSee below for a list of deployment options for your LangChain app. If you don't see your preferred option, please get in touch and we can add it to this list.", metadata: { loc: [Object] } }
  ]
*/
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* * *

https://js.langchain.com/docs/modules/data_connection/retrievers/contextual_compression#embeddingsfilter
Contextual compression
======================
One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.
Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.
To use the Contextual Compression Retriever, you'll need:
* a base retriever
* a Document Compressor
The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.
Using a vanilla vector store retriever
--------------------------------------
Let's start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). Given an example question, our retriever returns one or two relevant docs and a few irrelevant docs, and even the relevant docs have a lot of irrelevant information in them. To extract all the context we can, we use an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`
```typescript
import * as fs from "fs";
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { LLMChainExtractor } from "langchain/retrievers/document_compressors/chain_extract";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
});
const baseCompressor = LLMChainExtractor.fromLLM(model);

const text = fs.readFileSync("state_of_the_union.txt", "utf8");

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const retriever = new ContextualCompressionRetriever({
  baseCompressor,
  baseRetriever: vectorStore.asRetriever(),
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.',
        metadata: [Object]
      },
      Document {
        pageContent: '"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."',
        metadata: [Object]
      },
      Document {
        pageContent: 'The onslaught of state laws targeting transgender Americans and their families is wrong.',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [ContextualCompressionRetriever](https://api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [LLMChainExtractor](https://api.js.langchain.com/classes/langchain_retrievers_document_compressors_chain_extract.LLMChainExtractor.html) from `langchain/retrievers/document_compressors/chain_extract`
`EmbeddingsFilter`
------------------
Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
This is most useful for non-vector store retrievers where we may not have control over the returned chunk size, or as part of a pipeline, as outlined below.
Here's an example:
```typescript
import * as fs from "fs";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";

const baseCompressor = new EmbeddingsFilter({
  embeddings: new OpenAIEmbeddings(),
  similarityThreshold: 0.8,
});

const text = fs.readFileSync("state_of_the_union.txt", "utf8");

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const retriever = new ContextualCompressionRetriever({
  baseCompressor,
  baseRetriever: vectorStore.asRetriever(),
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.\n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.\n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.',
        metadata: [Object]
      },
      Document {
        pageContent: 'In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.\n\nWe cannot let this happen.\n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.\n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.\n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ContextualCompressionRetriever](https://api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [EmbeddingsFilter](https://api.js.langchain.com/classes/langchain_retrievers_document_compressors_embeddings_filter.EmbeddingsFilter.html) from `langchain/retrievers/document_compressors/embeddings_filter`
Stringing compressors and document transformers together
---------------------------------------------------------
Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors, we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example, `TextSplitter`s can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsFilter` can be used to filter out documents based on similarity of the individual chunks to the input query.
Below we create a compressor pipeline by first splitting raw webpage documents retrieved from the [Tavily web search API retriever](/v0.1/docs/integrations/retrievers/tavily/) into smaller chunks, then filtering based on relevance to the query. The result is smaller chunks that are semantically similar to the input query. This skips the need to add documents to a vector store to perform similarity search, which can be useful for one-off use cases:
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ContextualCompressionRetriever } from "langchain/retrievers/contextual_compression";
import { EmbeddingsFilter } from "langchain/retrievers/document_compressors/embeddings_filter";
import { TavilySearchAPIRetriever } from "@langchain/community/retrievers/tavily_search_api";
import { DocumentCompressorPipeline } from "langchain/retrievers/document_compressors";

const embeddingsFilter = new EmbeddingsFilter({
  embeddings: new OpenAIEmbeddings(),
  similarityThreshold: 0.8,
  k: 5,
});

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 200,
  chunkOverlap: 0,
});

const compressorPipeline = new DocumentCompressorPipeline({
  transformers: [textSplitter, embeddingsFilter],
});

const baseRetriever = new TavilySearchAPIRetriever({
  includeRawContent: true,
});

const retriever = new ContextualCompressionRetriever({
  baseCompressor: compressorPipeline,
  baseRetriever,
});

const retrievedDocs = await retriever.invoke(
  "What did the speaker say about Justice Breyer in the 2022 State of the Union?"
);
console.log({ retrievedDocs });

/*
  {
    retrievedDocs: [
      Document {
        pageContent: 'Justice Stephen Breyer talks to President Joe Biden ahead of the State of the Union address on Tuesday. (jabin botsford/Agence France-Presse/Getty Images)',
        metadata: [Object]
      },
      Document {
        pageContent: 'President Biden recognized outgoing US Supreme Court Justice Stephen Breyer during his State of the Union on Tuesday.',
        metadata: [Object]
      },
      Document {
        pageContent: 'What we covered here\nBiden recognized outgoing Supreme Court Justice Breyer during his speech',
        metadata: [Object]
      },
      Document {
        pageContent: 'States Supreme Court. Justice Breyer, thank you for your service,” the president said.',
        metadata: [Object]
      },
      Document {
        pageContent: 'Court," Biden said. "Justice Breyer, thank you for your service."',
        metadata: [Object]
      }
    ]
  }
*/
```
#### API Reference:
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ContextualCompressionRetriever](https://api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html) from `langchain/retrievers/contextual_compression`
* [EmbeddingsFilter](https://api.js.langchain.com/classes/langchain_retrievers_document_compressors_embeddings_filter.EmbeddingsFilter.html) from `langchain/retrievers/document_compressors/embeddings_filter`
* [TavilySearchAPIRetriever](https://api.js.langchain.com/classes/langchain_community_retrievers_tavily_search_api.TavilySearchAPIRetriever.html) from `@langchain/community/retrievers/tavily_search_api`
* [DocumentCompressorPipeline](https://api.js.langchain.com/classes/langchain_retrievers_document_compressors.DocumentCompressorPipeline.html) from `langchain/retrievers/document_compressors`