Google PaLM
===========
note
This integration does not support `gemini-*` models. Use [Google AI](/v0.1/docs/integrations/chat/google_generativeai/) or [Google Vertex AI](/v0.1/docs/integrations/llms/google_vertex_ai/) instead.
To integrate with the [Google PaLM API](https://developers.generativeai.google/products/palm), first install the required packages:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install google-auth-library @google-ai/generativelanguage @langchain/community

# Yarn
yarn add google-auth-library @google-ai/generativelanguage @langchain/community

# pnpm
pnpm add google-auth-library @google-ai/generativelanguage @langchain/community
```
Create an **API key** from [Google MakerSuite](https://makersuite.google.com/app/apikey). You can then set the key as the `GOOGLE_PALM_API_KEY` environment variable or pass it as the `apiKey` parameter when instantiating the model.
```typescript
import { GooglePaLM } from "@langchain/community/llms/googlepalm";

export const run = async () => {
  const model = new GooglePaLM({
    apiKey: "<YOUR API KEY>", // or set it in environment variable as `GOOGLE_PALM_API_KEY`
    // other params
    temperature: 1, // OPTIONAL
    model: "models/text-bison-001", // OPTIONAL
    maxOutputTokens: 1024, // OPTIONAL
    topK: 40, // OPTIONAL
    topP: 3, // OPTIONAL
    safetySettings: [
      // OPTIONAL
      {
        category: "HARM_CATEGORY_DANGEROUS",
        threshold: "BLOCK_MEDIUM_AND_ABOVE",
      },
    ],
    stopSequences: ["stop"], // OPTIONAL
  });
  const res = await model.invoke(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });
};
```
#### API Reference:
* [GooglePaLM](https://api.js.langchain.com/classes/langchain_community_llms_googlepalm.GooglePaLM.html) from `@langchain/community/llms/googlepalm`
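If you set the key as the `GOOGLE_PALM_API_KEY` environment variable instead, instantiation needs no explicit `apiKey`. A minimal sketch:

```typescript
import { GooglePaLM } from "@langchain/community/llms/googlepalm";

// Reads the key from the GOOGLE_PALM_API_KEY environment variable.
const model = new GooglePaLM({ temperature: 0.7 });
const res = await model.invoke("Tell me a short fact about socks.");
console.log({ res });
```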
Google Vertex AI (Legacy)
=========================
LangChain.js supports two different authentication methods based on whether you're running in a Node.js environment or a web environment.
Setup
-----
### Node.js
To call Vertex AI models in Node, you'll need to install [Google's official auth client](https://www.npmjs.com/package/google-auth-library) as a peer dependency.
You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:
* You are logged in to an account with access to the project (using `gcloud auth application-default login`).
* You are running on a machine using a service account with access to the project.
* You have downloaded the credentials for a service account with access to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
```bash
# npm
npm install google-auth-library

# Yarn
yarn add google-auth-library

# pnpm
pnpm add google-auth-library
```
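If you'd rather point at a downloaded key file explicitly instead of relying on the `GOOGLE_APPLICATION_CREDENTIALS` environment variable, here is a minimal sketch, assuming `authOptions` accepts the same options as `google-auth-library`'s `GoogleAuth` (it accepts `credentials` per the web example below; the file path is hypothetical):

```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

// Hypothetical key path; authOptions is forwarded to google-auth-library.
const model = new GoogleVertexAI({
  authOptions: {
    keyFilename: "/path/to/service-account.json",
  },
});
```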
### Web
To call Vertex AI models in web environments (like Edge functions), you'll need to install the [`web-auth-library`](https://github.com/kriasoft/web-auth-library) package as a peer dependency:
```bash
# npm
npm install web-auth-library

# Yarn
yarn add web-auth-library

# pnpm
pnpm add web-auth-library
```
Then, you'll need to add your service account credentials directly as a `GOOGLE_VERTEX_AI_WEB_CREDENTIALS` environment variable:
```bash
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
```
You can also pass your credentials directly in code like this:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/community

# Yarn
yarn add @langchain/community

# pnpm
pnpm add @langchain/community
```
```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

const model = new GoogleVertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});
```
Usage
-----
Several models are available and can be specified by the `model` attribute in the constructor. These include:
* text-bison (default)
* text-bison-32k
* code-gecko
* code-bison
```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";
// Or, if using the web entrypoint:
// import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai/web";

/*
 * Before running this, you should make sure you have created a
 * Google Cloud Project that is permitted to the Vertex AI API.
 *
 * You will also need permission to access this project / API.
 * Typically, this is done in one of three ways:
 * - You are logged into an account permitted to that project.
 * - You are running this on a machine using a service account permitted to
 *   the project.
 * - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to the
 *   path of a credentials file for a service account permitted to the project.
 */
const model = new GoogleVertexAI({
  temperature: 0.7,
});
const res = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```
#### API Reference:
* [GoogleVertexAI](https://api.js.langchain.com/classes/langchain_community_llms_googlevertexai.GoogleVertexAI.html) from `@langchain/community/llms/googlevertexai`
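For example, here is a minimal sketch selecting the larger-context `text-bison-32k` variant from the list above, assuming the same Google Cloud auth setup:

```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

// Same auth requirements as above; only the model name changes.
const model = new GoogleVertexAI({
  model: "text-bison-32k",
  temperature: 0.7,
});
const res = await model.invoke(
  "Summarize the plot of Hamlet in one paragraph."
);
console.log({ res });
```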
Google also provides separate "Codey" models for code generation.
The "code-gecko" model is useful for code completion:
```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

/*
 * Before running this, you should make sure you have created a
 * Google Cloud Project that is permitted to the Vertex AI API.
 *
 * You will also need permission to access this project / API.
 * Typically, this is done in one of three ways:
 * - You are logged into an account permitted to that project.
 * - You are running this on a machine using a service account permitted to
 *   the project.
 * - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to the
 *   path of a credentials file for a service account permitted to the project.
 */
const model = new GoogleVertexAI({
  model: "code-gecko",
});
const res = await model.invoke("for (let co=0;");
console.log({ res });
```
#### API Reference:
* [GoogleVertexAI](https://api.js.langchain.com/classes/langchain_community_llms_googlevertexai.GoogleVertexAI.html) from `@langchain/community/llms/googlevertexai`
While the "code-bison" model is better at larger code generation based on a text prompt:
```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

/*
 * Before running this, you should make sure you have created a
 * Google Cloud Project that is permitted to the Vertex AI API.
 *
 * You will also need permission to access this project / API.
 * Typically, this is done in one of three ways:
 * - You are logged into an account permitted to that project.
 * - You are running this on a machine using a service account permitted to
 *   the project.
 * - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to the
 *   path of a credentials file for a service account permitted to the project.
 */
const model = new GoogleVertexAI({
  model: "code-bison",
  maxOutputTokens: 2048,
});
const res = await model.invoke(
  "A Javascript function that counts from 1 to 10."
);
console.log({ res });
```
#### API Reference:
* [GoogleVertexAI](https://api.js.langchain.com/classes/langchain_community_llms_googlevertexai.GoogleVertexAI.html) from `@langchain/community/llms/googlevertexai`
### Streaming
Streaming in multiple chunks is supported for faster responses:
```typescript
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

const model = new GoogleVertexAI({
  temperature: 0.7,
});
const stream = await model.stream(
  "What would be a good company name for a company that makes colorful socks?"
);
for await (const chunk of stream) {
  console.log("\n---------\nChunk:\n---------\n", chunk);
}
/*
  ---------
  Chunk:
  ---------
  1. Toe-tally Awesome Socks
  2. The Sock Drawer
  3. Happy Feet
  4.

  ---------
  Chunk:
  ---------
  Sock It to Me
  5. Crazy Color Socks
  6. Wild and Wacky Socks
  7. Fu

  ---------
  Chunk:
  ---------
  nky Feet
  8. Mismatched Socks
  9. Rainbow Socks
  10. Sole Mates

  ---------
  Chunk:
  ---------
*/
```
#### API Reference:
* [GoogleVertexAI](https://api.js.langchain.com/classes/langchain_community_llms_googlevertexai.GoogleVertexAI.html) from `@langchain/community/llms/googlevertexai`
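Streaming also composes with LCEL. Here is a minimal sketch, assuming the same auth setup, that pipes a `PromptTemplate` into the model and streams the chain's output:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";
import { GoogleVertexAI } from "@langchain/community/llms/googlevertexai";

const prompt = PromptTemplate.fromTemplate(
  "What would be a good company name for a company that makes {product}?"
);
const model = new GoogleVertexAI({ temperature: 0.7 });

// .pipe() builds a runnable sequence; .stream() yields string chunks.
const stream = await prompt.pipe(model).stream({ product: "colorful socks" });
for await (const chunk of stream) {
  console.log(chunk);
}
```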
Databerry
=========
This page covers how to use [Databerry](https://databerry.ai) within LangChain.
What is Databerry?
------------------
Databerry is an [open source](https://github.com/gmpetrov/databerry) document retrieval platform that connects your personal data with Large Language Models.
![Databerry](/v0.1/assets/images/DataberryDashboard-098b0319db0dd6665f993f77f4db822f.png)
Quick start
-----------
Retrieving documents stored in Databerry from LangChain is very easy!
```typescript
import { DataberryRetriever } from "langchain/retrievers/databerry";
import { RetrievalQAChain } from "langchain/chains";
import { OpenAI } from "@langchain/openai";

// The model used by the chain below.
const model = new OpenAI({});

const retriever = new DataberryRetriever({
  datastoreUrl: "https://api.databerry.ai/query/clg1xg2h80000l708dymr0fxc",
  apiKey: "DATABERRY_API_KEY", // optional: needed for private datastores
  topK: 8, // optional: default value is 3
});

// Create a chain that uses the OpenAI LLM and Databerry retriever.
const chain = RetrievalQAChain.fromLLM(model, retriever);

// Call the chain with a query.
const res = await chain.call({
  query: "What's Databerry?",
});

console.log({ res });
/*
  {
    res: {
      text: 'Databerry provides a user-friendly solution to quickly setup a semantic search system over your personal data without any technical knowledge.'
    }
  }
*/
```
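If you don't need a full chain, the retriever from the snippet above can also be queried directly through the standard retriever interface:

```typescript
// Continues from the snippet above: fetch raw documents from the datastore.
const docs = await retriever.getRelevantDocuments("What's Databerry?");
console.log(docs.map((doc) => doc.pageContent));
```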
Helicone
========
This page covers how to use [Helicone](https://helicone.ai) within LangChain.
What is Helicone?
-----------------
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides key insights into your spend, latency, and usage.
![Helicone](/v0.1/assets/images/HeliconeDashboard-bc06f9888dbb03ff98d894fe9bec2b29.png)
Quick start
-----------
Within your LangChain environment, you just need to add the following parameter:
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
  }
);
const res = await model.invoke("What is a helicone?");
```
Now head over to [helicone.ai](https://www.helicone.ai/) to create your account, and add your OpenAI API key within our dashboard to view your logs.
![Helicone](/v0.1/assets/images/HeliconeKeys-9ff580101e3a63ee05e2fa67b8def03c.png)
How to enable Helicone caching
------------------------------
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        "Helicone-Cache-Enabled": "true",
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");
```
[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)
How to use Helicone custom properties
-------------------------------------
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        "Helicone-Property-Session": "24",
        "Helicone-Property-Conversation": "support_issue_2",
        "Helicone-Property-App": "mobile",
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");
```
[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)
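Caching and custom properties can be combined in a single configuration. A minimal sketch using only the headers shown above:

```typescript
import { OpenAI } from "@langchain/openai";

// Enables Helicone caching and attaches a custom property in one config.
const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        "Helicone-Cache-Enabled": "true",
        "Helicone-Property-App": "mobile",
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");
```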
Lunary
======
This page covers how to use [Lunary](https://lunary.ai?utm_source=langchain&utm_medium=js&utm_campaign=docs) with LangChain.
What is Lunary?
---------------
Lunary is an [open-source](https://github.com/lunary-ai/lunary) platform that provides observability (tracing, analytics, feedback tracking), prompt template management, and evaluation for AI apps.
Installation
------------
Start by installing the Lunary package in your project:
```bash
# npm
npm install lunary

# Yarn
yarn add lunary

# pnpm
pnpm add lunary
```
Setup
-----
Create an account on [lunary.ai](https://lunary.ai?utm_source=langchain&utm_medium=js&utm_campaign=docs). Then, create an App and copy the associated `tracking id`.
Once you have it, set it as an environment variable in your `.env`:
```bash
LUNARY_APP_ID="..."

# Optional if you're self hosting:
# LUNARY_API_URL="..."
```
If you prefer not to use environment variables, you can set your app ID explicitly like this:
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";

const handler = new LunaryHandler({
  appId: "app ID",
  // verbose: true,
  // apiUrl: 'custom self hosting url'
});
```
You can now use the callback handler with LLM calls, chains and agents.
Quick Start
-----------
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  callbacks: [new LunaryHandler()],
});
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
LangChain Agent Tracing
-----------------------
When tracing chains or agents, make sure to include the callback at the run level so that all sub-LLM calls and chain runs are reported as well.
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [new Calculator()];
const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
  callbacks: [new LunaryHandler()],
});

const executor = await initializeAgentExecutorWithOptions(tools, chat, {
  agentType: "openai-functions",
});

const result = await executor.run(
  "What is the approximate result of 78 to the power of 5?",
  {
    callbacks: [new LunaryHandler()],
    metadata: { agentName: "SuperCalculator" },
  }
);
```
#### API Reference:
* [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
Tracking users
--------------
You can track users by adding `userId` and `userProps` to the metadata of your calls:
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";

const tools = [new Calculator()];
const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
  callbacks: [new LunaryHandler()],
});

const executor = await initializeAgentExecutorWithOptions(tools, chat, {
  agentType: "openai-functions",
});

const result = await executor.run(
  "What is the approximate result of 78 to the power of 5?",
  {
    callbacks: [new LunaryHandler()],
    metadata: {
      agentName: "SuperCalculator",
      userId: "user123",
      userProps: {
        name: "John Doe",
        email: "email@example.org",
      },
    },
  }
);
```
#### API Reference:
* [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
Tagging calls
-------------
You can tag calls with `tags`:
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
  callbacks: [new LunaryHandler()],
});

await chat.invoke("Hello", {
  tags: ["greeting"],
});
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
Usage with custom agents
------------------------
You can use the callback handler combined with the `lunary` module to track custom agents that partially use LangChain:
```typescript
import { LunaryHandler } from "@langchain/community/callbacks/handlers/lunary";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import lunary from "lunary";

const chat = new ChatOpenAI({
  model: "gpt-4",
  callbacks: [new LunaryHandler()],
});

async function TranslatorAgent(query: string) {
  const res = await chat.invoke([
    new SystemMessage(
      "You are a translator agent that hides jokes in each translation."
    ),
    new HumanMessage(
      `Translate this sentence from English to French: ${query}`
    ),
  ]);
  return res.content;
}

// By wrapping the agent with wrapAgent, we automatically track all input, outputs and errors
// And tools and logs will be tied to the correct agent
const translate = lunary.wrapAgent(TranslatorAgent);

// You can use .identify() on wrapped methods to track users
const res = await translate("Good morning").identify("user123");
console.log(res);
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [SystemMessage](https://api.js.langchain.com/classes/langchain_core_messages.SystemMessage.html) from `@langchain/core/messages`
Full documentation
------------------
You can find the full documentation of the Lunary LangChain integration [here](https://lunary.ai/docs/langchain?utm_source=langchain&utm_medium=js&utm_campaign=docs).
Support
-------
For any questions or issues with the integration, you can reach out to the Lunary team via [email](mailto:vince@lunary.ai) or live chat on the website.
Unstructured
============
This page covers how to use [Unstructured](https://unstructured.io) within LangChain.
What is Unstructured?
---------------------
Unstructured is an [open source](https://github.com/Unstructured-IO/unstructured) Python package for extracting text from raw documents for use in machine learning applications. Currently, Unstructured supports partitioning Word documents (in `.doc` or `.docx` format), PowerPoints (in `.ppt` or `.pptx` format), PDFs, HTML files, images, emails (in `.eml` or `.msg` format), epubs, markdown, and plain text files.
`unstructured` is a Python package and cannot be used directly with TS/JS; however, Unstructured also maintains a [REST API](https://github.com/Unstructured-IO/unstructured-api) to support pre-processing pipelines written in other programming languages. The endpoint for the hosted Unstructured API is `https://api.unstructured.io/general/v0/general`, or you can run the service locally using the instructions found [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image).
Check out the [Unstructured documentation page](https://unstructured-io.github.io/unstructured/) for instructions on how to obtain an API key.
Quick start
-----------
You can use Unstructured in `langchain` with the following code. Replace the filename with the file you would like to process. If you are running the container locally, switch the URL to `http://127.0.0.1:8000/general/v0/general`. Check out the [API documentation page](https://api.unstructured.io/general/docs) for additional details.
```typescript
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);

const docs = await loader.load();
```
#### API Reference:
* [UnstructuredLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredLoader.html) from `langchain/document_loaders/fs/unstructured`
Directories
-----------
You can also load all of the files in the directory using `UnstructuredDirectoryLoader`, which inherits from [`DirectoryLoader`](/v0.1/docs/integrations/document_loaders/file_loaders/directory/):
```typescript
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredDirectoryLoader(
  "langchain/src/document_loaders/tests/example_data",
  options
);

const docs = await loader.load();
```
#### API Reference:
* [UnstructuredDirectoryLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html) from `langchain/document_loaders/fs/unstructured`
Currently, the `UnstructuredLoader` supports the following document types:
* Plain text files (`.txt`/`.text`)
* PDFs (`.pdf`)
* Word Documents (`.doc`/`.docx`)
* PowerPoints (`.ppt`/`.pptx`)
* Images (`.jpg`/`.jpeg`)
* Emails (`.eml`/`.msg`)
* HTML (`.html`)
* Markdown Files (`.md`)
The output from the `UnstructuredLoader` will be an array of `Document` objects that looks like the following:
```
[
  Document {
    pageContent: `Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, wh ich performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self -attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predic tions for position i can depend only on the known outputs at positions less than i.`,
    metadata: {
      page_number: 3,
      filename: '1706.03762.pdf',
      category: 'NarrativeText'
    }
  },
  Document {
    pageContent: '3.2 Attention',
    metadata: {
      page_number: 3,
      filename: '1706.03762.pdf',
      category: 'Title'
    }
  }
]
```
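Since each returned element carries a `category` in its metadata, you can post-filter the loaded documents. A short sketch, continuing from the loader example above:

```typescript
// Keep only narrative text elements, dropping titles and other categories.
const narrative = docs.filter(
  (doc) => doc.metadata.category === "NarrativeText"
);
console.log(narrative.length);
```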
Google MakerSuite
=================
Google's [MakerSuite](https://makersuite.google.com/) is a web-based playground for creating and saving chat, text, and "data" prompts that work with the Google PaLM API and models. These prompts include the prompt text itself, which may contain "test input" fields that act as template parameters, along with settings for the model such as the model name, temperature, etc.
LangChain.js provides the `MakerSuiteHub` class, which lets you pull these prompts from Google Drive, where they are saved. Once pulled, you can convert a prompt into a LangChain template, an LLM model, or a chain that combines the two. The hub class keeps a simple time-based in-memory cache of prompts, so it is not always accessing the prompt saved in Google Drive.
Using MakerSuite in this way lets you treat it as a simple Content Management System (CMS) of sorts, or separate tasks between prompt authors and other developers.
Setup
-----
You do not need any additional packages beyond those that are required for either the Google PaLM [text](/v0.1/docs/integrations/llms/google_palm/) or [chat](/v0.1/docs/integrations/chat/google_palm/) model:
```bash
# npm
npm install google-auth-library @google-ai/generativelanguage

# Yarn
yarn add google-auth-library @google-ai/generativelanguage

# pnpm
pnpm add google-auth-library @google-ai/generativelanguage
```
Credentials and Authorization
-----------------------------
You will need two sets of credentials:
* An API Key to access the PaLM API.
Create this at [Google MakerSuite](https://makersuite.google.com/app/apikey). Then set the key as the `GOOGLE_PALM_API_KEY` environment variable.
* Credentials for a service account that has been permitted access to the Google Drive APIs.
These credentials may be used in one of three ways:
* You are logged in to an account with access to the project (using `gcloud auth application-default login`).
* You are running on a machine using a service account with access to the project.
* You have downloaded the credentials for a service account with access to the project and set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of this file.
This service account should also have access to the MakerSuite folder in Google Drive, or to the specific prompt file itself. Even if the prompt file is readable by anyone, you will still need a service account that is permitted to access Google Drive.
The Prompt File ID
------------------
The easiest way to get the ID of the prompt file is to open it in MakerSuite and examine the URL. The URL should look something like:
```
https://makersuite.google.com/app/prompts/1gxLasQIeQdwR4wxtV_nb93b_g9f0GaMm
```
The final portion of this, `1gxLasQIeQdwR4wxtV_nb93b_g9f0GaMm`, is the ID.
We will be using this in our examples below. This prompt contains a Template that is equivalent to:
```
What would be a good name for a company that makes {product}
```
With model parameters set that include:
* Model name: Text Bison
* Temperature: 0.7
* Max outputs: 1
* Standard safety settings
Use
---
The most typical way to use the hub consists of two parts:
1. Creating the `MakerSuiteHub` class once.
2. Pulling the prompt, getting the chain, and providing values for the template to get the result.
```typescript
// Create the hub class
import { MakerSuiteHub } from "langchain/experimental/hubs/makersuite/googlemakersuitehub";

const hub = new MakerSuiteHub();

// Pull the prompt, get the chain, and invoke it with the template values
const prompt = await hub.pull("1gxLasQIeQdwR4wxtV_nb93b_g9f0GaMm");
const result = await prompt.toChain().invoke({ product: "socks" });
console.log("text chain result", result);
```
### Configuring the hub
Since the hub implements a basic time-based in-memory cache, you can configure how long a prompt saved in the cache remains valid before it is reloaded.
This value defaults to 0, meaning the prompt is always loaded from Google Drive, or you can set it to the number of milliseconds it should remain valid in the cache:
```typescript
const hub = new MakerSuiteHub({
  cacheTimeout: 3600000, // One hour
});
```
### Getting the Template or Model
In some cases, you may need to get just the template or just the model that is represented by the prompt.
```typescript
const template = prompt.toTemplate();
const textModel = prompt.toModel() as GooglePaLM;
const chatModel = prompt.toModel() as ChatGooglePaLM;
```
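Here is a minimal sketch of using the extracted template on its own, assuming the example prompt above (whose template takes a `product` parameter):

```typescript
// Format the prompt locally without calling the model.
const formatted = await template.format({ product: "socks" });
console.log(formatted);
// "What would be a good name for a company that makes socks"
```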
On this page
Quick Start
===========
Chat models are a variation on language models. While chat models use language models under the hood, the interface they use is a bit different. Rather than using a "text in, text out" API, they use an interface where "chat messages" are the inputs and outputs.
Setup
---------------------------------------
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
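For example, with the OpenAI integration the older provider-specific names still work, but the unified names are now preferred (a minimal before/after sketch):

import { ChatOpenAI } from "@langchain/openai";

// Previously:
// const chatModel = new ChatOpenAI({ modelName: "gpt-3.5-turbo", openAIApiKey: "..." });

// Now preferred:
const chatModel = new ChatOpenAI({ model: "gpt-3.5-turbo", apiKey: "..." });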
* OpenAI
* Local (using Ollama)
* Anthropic
* Google GenAI
First we'll need to install the LangChain OpenAI integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable:
OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable, you can pass the key in directly via the `apiKey` named parameter when instantiating the OpenAI chat model class:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  apiKey: "...",
});
Otherwise you can initialize without any params:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI();
[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.
First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
* [Download](https://ollama.ai/download)
* Fetch a model via e.g. `ollama pull mistral`
Then, make sure the Ollama server is running. Next, you'll need to install the LangChain community package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/community`
* Yarn: `yarn add @langchain/community`
* pnpm: `pnpm add @langchain/community`
And then you can do:
import { ChatOllama } from "@langchain/community/chat_models/ollama";

const chatModel = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "mistral",
});
First we'll need to install the LangChain Anthropic integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`
Accessing the API requires an API key, which you can get by creating an account [here](https://console.anthropic.com/). Once we have a key we'll want to set it as an environment variable:
ANTHROPIC_API_KEY="..."
If you'd prefer not to set an environment variable, you can pass the key in directly via the `apiKey` named parameter when instantiating the `ChatAnthropic` class:
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
  apiKey: "...",
});
Otherwise you can initialize without any params:
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic();
First we'll need to install the LangChain Google GenAI integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/google-genai`
* Yarn: `yarn add @langchain/google-genai`
* pnpm: `pnpm add @langchain/google-genai`
Accessing the API requires an API key, which you can get by creating an account [here](https://ai.google.dev/tutorials/setup). Once we have a key we'll want to set it as an environment variable:
GOOGLE_API_KEY="..."
If you'd prefer not to set an environment variable, you can pass the key in directly via the `apiKey` named parameter when instantiating the `ChatGoogleGenerativeAI` class:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const chatModel = new ChatGoogleGenerativeAI({
  apiKey: "...",
});
Otherwise you can initialize without any params:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const chatModel = new ChatGoogleGenerativeAI();
Messages
------------------------------------------------
The chat model interface is based around messages rather than raw text. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, `FunctionMessage`, and `ChatMessage` (`ChatMessage` takes an arbitrary role parameter). Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
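For reference, here is how each of the common message types is constructed (a minimal sketch):

import {
  AIMessage,
  ChatMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

const systemMessage = new SystemMessage("You are a helpful assistant.");
const humanMessage = new HumanMessage("What is the capital of France?");
const aiMessage = new AIMessage("The capital of France is Paris.");
// ChatMessage lets you supply an arbitrary role yourself:
const customMessage = new ChatMessage("Checking in!", "moderator");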
LCEL
------------------------------------
Chat models implement the [Runnable interface](/v0.1/docs/expression_language/interface/), the basic building block of the [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/). This means they support `invoke`, `stream`, `batch`, and `streamLog` calls.
Chat models accept `BaseMessage[]` as inputs, or objects which can be coerced to messages, including `string` (converted to `HumanMessage`) and `PromptValue`.
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("You're a helpful assistant"),
  new HumanMessage("What is the purpose of model regularization?"),
];
await chatModel.invoke(messages);
AIMessage {
  content: "The purpose of model regularization is to prevent overfitting in machine learning models. Overfitting occurs when a model becomes too complex and starts to fit the noise in the training data, leading to poor generalization on unseen data. Regularization techniques introduce additional constraints or penalties to the model's objective function, discouraging it from becoming overly complex and promoting simpler and more generalizable models. Regularization helps to strike a balance between fitting the training data well and avoiding overfitting, leading to better performance on new, unseen data."
}
See the [Runnable interface](/v0.1/docs/expression_language/interface/) for more details on the available methods.
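Since inputs are coerced, a bare string works as shorthand for a single `HumanMessage`, and `batch` runs several inputs at once (a minimal sketch):

// Equivalent to invoking with [new HumanMessage("...")]
await chatModel.invoke("What is the purpose of model regularization?");

// `batch` accepts an array of inputs and returns one AI response per input
const responses = await chatModel.batch([
  "Translate 'I love programming' to French.",
  "Translate 'I love programming' to German.",
]);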
[LangSmith](https://docs.smith.langchain.com/)
----------------------------------------------------------------------------------------
All `ChatModel`s come with built-in LangSmith tracing. Just set the following environment variables:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=<your-api-key>
and any `ChatModel` invocation (whether it's nested in a chain or not) will automatically be traced. A trace will include inputs, outputs, latency, token usage, invocation params, environment params, and more. See an example here: [https://smith.langchain.com/public/a54192ae-dd5c-4f7a-88d1-daa1eaba1af7/r](https://smith.langchain.com/public/a54192ae-dd5c-4f7a-88d1-daa1eaba1af7/r).
In LangSmith you can then provide feedback for any trace, compile annotated datasets for evals, debug performance in the playground, and more.
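If you can't export shell variables in your environment (for example, in some serverless runtimes), the same configuration can be set in code before the first invocation; a minimal sketch:

process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "<your-api-key>";

const tracedResponse = await chatModel.invoke("What is LangSmith?");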
[Legacy] `generate`
---------------------------------------------------------------------------
### Batch calls, richer outputs
You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter.
const response3 = await chatModel.generate([
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanMessage(
      "Translate this sentence from English to French. I love programming."
    ),
  ],
  [
    new SystemMessage(
      "You are a helpful assistant that translates English to French."
    ),
    new HumanMessage(
      "Translate this sentence from English to French. I love artificial intelligence."
    ),
  ],
]);
console.log(response3);
/*
  {
    generations: [
      [
        {
          text: "J'aime programmer.",
          message: AIMessage { text: "J'aime programmer." },
        }
      ],
      [
        {
          text: "J'aime l'intelligence artificielle.",
          message: AIMessage { text: "J'aime l'intelligence artificielle." }
        }
      ]
    ]
  }
*/
You can recover things like token usage from this LLMResult:
console.log(response3.llmOutput);
/*
  {
    tokenUsage: {
      completionTokens: 20,
      promptTokens: 69,
      totalTokens: 89
    }
  }
*/
https://js.langchain.com/v0.1/docs/modules/model_io/chat/streaming/
Streaming
=========
Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated.
Using `.stream()`
-----------------------------------------------------------------
The easiest way to stream is to use the `.stream()` method. This returns a readable stream that you can also iterate over:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  maxTokens: 25,
});

// Pass in a human message. Also accepts a raw string, which is automatically
// inferred to be a human message.
const stream = await chat.stream([["human", "Tell me a joke about bears."]]);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
AIMessageChunk {
  content: '',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'Why',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' did',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' the',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' bear',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' bring',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' a',
  additional_kwargs: {}
}
...
*/
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
For models that do not support streaming, the entire response will be returned as a single chunk.
For convenience, you can also pipe a chat model into a [StringOutputParser](/v0.1/docs/modules/model_io/output_parsers/types/string/) to extract just the raw string values from each chunk:
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

const parser = new StringOutputParser();
const model = new ChatOpenAI({ temperature: 0 });
const stream = await model.pipe(parser).stream("Hello there!");

for await (const chunk of stream) {
  console.log(chunk);
}

/*
  Hello
  !
  How
  can
  I
  assist
  you
  today
  ?
*/
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
You can also do something similar to stream bytes directly (e.g. for returning a stream in an HTTP response) using the [HttpResponseOutputParser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/):
import { ChatOpenAI } from "@langchain/openai";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const handler = async () => {
  const parser = new HttpResponseOutputParser();
  const model = new ChatOpenAI({ temperature: 0 });
  const stream = await model.pipe(parser).stream("Hello there!");

  const httpResponse = new Response(stream, {
    headers: {
      "Content-Type": "text/plain; charset=utf-8",
    },
  });

  return httpResponse;
};

await handler();
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HttpResponseOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) from `langchain/output_parsers`
Using a callback handler
------------------------------------------------------------------------------------------------
You can also use a [`CallbackHandler`](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/callbacks/base.ts) like so:
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({
  maxTokens: 25,
  streaming: true,
});

const response = await chat.invoke([new HumanMessage("Tell me a joke.")], {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ],
});

console.log(response);
// { token: '' }
// { token: '\n\n' }
// { token: 'Why' }
// { token: ' don' }
// { token: "'t" }
// { token: ' scientists' }
// { token: ' trust' }
// { token: ' atoms' }
// { token: '?\n\n' }
// { token: 'Because' }
// { token: ' they' }
// { token: ' make' }
// { token: ' up' }
// { token: ' everything' }
// { token: '.' }
// { token: '' }
// AIMessage {
//   text: "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything."
// }
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
https://js.langchain.com/v0.1/docs/modules/model_io/chat/caching/
Caching
=======
LangChain provides an optional caching layer for chat models. This is useful for two reasons:
* It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
* It can speed up your application by reducing the number of API calls you make to the LLM provider.
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
import { ChatOpenAI } from "@langchain/openai";

// To make the caching really obvious, let's use a slower model.
const model = new ChatOpenAI({
  model: "gpt-4",
  cache: true,
});
In Memory Cache
---------------------------------------------------------------------
The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
console.time();
// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a joke!");
console.log(res);
console.timeEnd();

/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
      additional_kwargs: { function_call: undefined, tool_calls: undefined }
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }

  default: 2.224s
*/
console.time();
// The second time it is, so it goes faster
const res2 = await model.invoke("Tell me a joke!");
console.log(res2);
console.timeEnd();

/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
      additional_kwargs: { function_call: undefined, tool_calls: undefined }
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
    name: undefined,
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  }

  default: 181.98ms
*/
Caching with Momento
------------------------------------------------------------------------------------
LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. To use it, you'll need to install the `@gomomento/sdk` package:
* npm: `npm install @gomomento/sdk`
* Yarn: `yarn add @gomomento/sdk`
* pnpm: `pnpm add @gomomento/sdk`
Next you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM like this:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { ChatOpenAI } from "@langchain/openai";
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { MomentoCache } from "@langchain/community/caches/momento";

// See https://github.com/momentohq/client-sdk-javascript for connection options
const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_API_KEY",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});
const cache = await MomentoCache.fromProps({
  client,
  cacheName: "langchain",
});

const model = new ChatOpenAI({ cache });
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [MomentoCache](https://api.js.langchain.com/classes/langchain_community_caches_momento.MomentoCache.html) from `@langchain/community/caches/momento`
Caching with Redis
------------------------------------------------------------------------------
LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package:
* npm: `npm install ioredis`
* Yarn: `yarn add ioredis`
* pnpm: `pnpm add ioredis`
Then, you can pass a `cache` option when you instantiate the LLM. For example:
import { ChatOpenAI } from "@langchain/openai";
import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";

const client = new Redis("redis://localhost:6379");

const cache = new RedisCache(client, {
  ttl: 60, // Optional key expiration value
});

const model = new ChatOpenAI({ cache });

const response1 = await model.invoke("Do something random!");
console.log(response1);
/*
  AIMessage {
    content: "Sure! I'll generate a random number for you: 37",
    additional_kwargs: {}
  }
*/

const response2 = await model.invoke("Do something random!");
console.log(response2);
/*
  AIMessage {
    content: "Sure! I'll generate a random number for you: 37",
    additional_kwargs: {}
  }
*/

await client.disconnect();
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [RedisCache](https://api.js.langchain.com/classes/langchain_community_caches_ioredis.RedisCache.html) from `@langchain/community/caches/ioredis`
Caching with Upstash Redis
------------------------------------------------------------------------------------------------------
LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the `@upstash/redis` package:
* npm: `npm install @upstash/redis`
* Yarn: `yarn add @upstash/redis`
* pnpm: `pnpm add @upstash/redis`
You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token.
Then, you can pass a `cache` option when you instantiate the LLM. For example:
import { ChatOpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection options
const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
});

const model = new ChatOpenAI({ cache });
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:
import { Redis } from "@upstash/redis";
import https from "https";
import { ChatOpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// const client = new Redis({
//   url: process.env.UPSTASH_REDIS_REST_URL!,
//   token: process.env.UPSTASH_REDIS_REST_TOKEN!,
//   agent: new https.Agent({ keepAlive: true }),
// });

// Or simply call Redis.fromEnv() to automatically load the UPSTASH_REDIS_REST_URL
// and UPSTASH_REDIS_REST_TOKEN environment variables.
const client = Redis.fromEnv({
  agent: new https.Agent({ keepAlive: true }),
});

const cache = new UpstashRedisCache({ client });
const model = new ChatOpenAI({ cache });
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
Caching with Cloudflare KV
------------------------------------------------------------------------------------------------------
info
This integration is only supported in Cloudflare Workers.
If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache.
For information on how to set up KV in Cloudflare, see [the official documentation](https://developers.cloudflare.com/kv/).
**Note:** If you are using TypeScript, you may need to install types if they aren't already present:
* npm: `npm install -S @cloudflare/workers-types`
* Yarn: `yarn add @cloudflare/workers-types`
* pnpm: `pnpm add @cloudflare/workers-types`
import type { KVNamespace } from "@cloudflare/workers-types";
import { ChatOpenAI } from "@langchain/openai";
import { CloudflareKVCache } from "@langchain/cloudflare";

export interface Env {
  KV_NAMESPACE: KVNamespace;
  OPENAI_API_KEY: string;
}

export default {
  async fetch(_request: Request, env: Env) {
    try {
      const cache = new CloudflareKVCache(env.KV_NAMESPACE);
      const model = new ChatOpenAI({
        cache,
        model: "gpt-3.5-turbo",
        apiKey: env.OPENAI_API_KEY,
      });
      const response = await model.invoke("How are you today?");
      return new Response(JSON.stringify(response), {
        headers: { "content-type": "application/json" },
      });
    } catch (err: any) {
      console.log(err.message);
      return new Response(err.message, { status: 500 });
    }
  },
};
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [CloudflareKVCache](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareKVCache.html) from `@langchain/cloudflare`
Caching on the File System
------------------------------------------------------------------------------------------------------
danger
This cache is not recommended for production use. It is only intended for local development.
LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want.

import { LocalFileCache } from "langchain/cache/file_system";

const cache = await LocalFileCache.create();
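A minimal sketch of wiring the cache into a model; passing a directory path to `create()` to choose a custom location is an assumption here rather than something shown above:

import { ChatOpenAI } from "@langchain/openai";
import { LocalFileCache } from "langchain/cache/file_system";

// Assumed: an optional path argument persists cache entries in that
// directory instead of a freshly created temporary one.
const cache = await LocalFileCache.create("./.langchain-cache");
const model = new ChatOpenAI({ cache });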
https://js.langchain.com/v0.1/docs/modules/model_io/chat/custom_chat/
Custom chat models
==================
This notebook goes over how to create a custom chat model wrapper, in case you want to use your own chat model or a different wrapper than one that is directly supported in LangChain.
There are a few required things that a chat model needs to implement after extending the [`SimpleChatModel` class](https://api.js.langchain.com/classes/langchain_core_language_models_chat_models.SimpleChatModel.html):
* A `_call` method that takes in a list of messages and call options (which includes things like `stop` sequences), and returns a string.
* A `_llmType` method that returns a string. Used for logging purposes only.
You can also implement the following optional method:
* A `_streamResponseChunks` method that returns an `AsyncGenerator` and yields [`ChatGenerationChunks`](https://api.js.langchain.com/classes/langchain_core_outputs.ChatGenerationChunk.html). This allows the LLM to support streaming outputs.
Let's implement a very simple custom chat model that just echoes back the first `n` characters of the input.
import {
  SimpleChatModel,
  type BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { AIMessageChunk, type BaseMessage } from "@langchain/core/messages";
import { ChatGenerationChunk } from "@langchain/core/outputs";

export interface CustomChatModelInput extends BaseChatModelParams {
  n: number;
}

export class CustomChatModel extends SimpleChatModel {
  n: number;

  constructor(fields: CustomChatModelInput) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "custom";
  }

  async _call(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<string> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    return messages[0].content.slice(0, this.n);
  }

  async *_streamResponseChunks(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): AsyncGenerator<ChatGenerationChunk> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    for (const letter of messages[0].content.slice(0, this.n)) {
      yield new ChatGenerationChunk({
        message: new AIMessageChunk({
          content: letter,
        }),
        text: letter,
      });
      // Trigger the appropriate callback for new chunks
      await runManager?.handleLLMNewToken(letter);
    }
  }
}
We can now use this as any other chat model:
const chatModel = new CustomChatModel({ n: 4 });

await chatModel.invoke([["human", "I am an LLM"]]);
AIMessage {
  content: 'I am',
  additional_kwargs: {}
}
And support streaming:
const stream = await chatModel.stream([["human", "I am an LLM"]]);

for await (const chunk of stream) {
  console.log(chunk);
}
AIMessageChunk {
  content: 'I',
  additional_kwargs: {}
}
AIMessageChunk {
  content: ' ',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'a',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'm',
  additional_kwargs: {}
}
Richer outputs
------------------------------------------------------------------
If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseChatModel`](https://api.js.langchain.com/classes/langchain_core_language_models_chat_models.BaseChatModel.html) class and implement the lower level `_generate` method. It also takes a list of `BaseMessage`s as input, but requires you to construct and return a `ChatGeneration` object that permits additional metadata. Here's an example:
import { AIMessage, BaseMessage } from "@langchain/core/messages";
import { ChatResult } from "@langchain/core/outputs";
import {
  BaseChatModel,
  BaseChatModelCallOptions,
  BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

export interface AdvancedCustomChatModelOptions
  extends BaseChatModelCallOptions {}

export interface AdvancedCustomChatModelParams extends BaseChatModelParams {
  n: number;
}

export class AdvancedCustomChatModel extends BaseChatModel<AdvancedCustomChatModelOptions> {
  n: number;

  static lc_name(): string {
    return "AdvancedCustomChatModel";
  }

  constructor(fields: AdvancedCustomChatModelParams) {
    super(fields);
    this.n = fields.n;
  }

  async _generate(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<ChatResult> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    const content = messages[0].content.slice(0, this.n);
    const tokenUsage = {
      usedTokens: this.n,
    };
    return {
      generations: [{ message: new AIMessage({ content }), text: content }],
      llmOutput: { tokenUsage },
    };
  }

  _llmType(): string {
    return "advanced_custom_chat_model";
  }
}
This will pass the additional returned information in callback events and in the `streamEvents` method:
const chatModel = new AdvancedCustomChatModel({ n: 4 });

const eventStream = await chatModel.streamEvents([["human", "I am an LLM"]], {
  version: "v1",
});

for await (const event of eventStream) {
  if (event.event === "on_llm_end") {
    console.log(JSON.stringify(event, null, 2));
  }
}
{
  "event": "on_llm_end",
  "name": "AdvancedCustomChatModel",
  "run_id": "b500b98d-bee5-4805-9b92-532a491f5c70",
  "tags": [],
  "metadata": {},
  "data": {
    "output": {
      "generations": [
        [
          {
            "message": {
              "lc": 1,
              "type": "constructor",
              "id": ["langchain_core", "messages", "AIMessage"],
              "kwargs": {
                "content": "I am",
                "additional_kwargs": {}
              }
            },
            "text": "I am"
          }
        ]
      ],
      "llmOutput": {
        "tokenUsage": {
          "usedTokens": 4
        }
      }
    }
  }
}
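The same `llmOutput` is also delivered through the standard callback system; for example, a `handleLLMEnd` handler receives it (a minimal sketch):

const callbackModel = new AdvancedCustomChatModel({
  n: 4,
  callbacks: [
    {
      handleLLMEnd(output) {
        // Logs: { tokenUsage: { usedTokens: 4 } }
        console.log(output.llmOutput);
      },
    },
  ],
});

await callbackModel.invoke([["human", "I am an LLM"]]);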
Tracing (advanced)
----------------------------------------------------------------------------
If you are implementing a custom chat model and want to use it with a tracing service like [LangSmith](https://smith.langchain.com/), you can automatically log params used for a given invocation by implementing the `invocationParams()` method on the model.
This method is purely optional, but anything it returns will be logged as metadata for the trace.
Here's one pattern you might use:
import { AIMessage, BaseMessage } from "@langchain/core/messages";
import { ChatResult } from "@langchain/core/outputs";
import {
  BaseChatModel,
  BaseChatModelCallOptions,
  BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

export interface CustomChatModelOptions extends BaseChatModelCallOptions {
  // Some required or optional inner args
  tools: Record<string, any>[];
}

export interface CustomChatModelParams extends BaseChatModelParams {
  temperature: number;
}

export class CustomChatModel extends BaseChatModel<CustomChatModelOptions> {
  temperature: number;

  static lc_name(): string {
    return "CustomChatModel";
  }

  constructor(fields: CustomChatModelParams) {
    super(fields);
    this.temperature = fields.temperature;
  }

  // Anything returned in this method will be logged as metadata in the trace.
  // It is common to pass it any options used to invoke the function.
  invocationParams(options?: this["ParsedCallOptions"]) {
    return {
      tools: options?.tools,
      temperature: this.temperature,
    };
  }

  async _generate(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<ChatResult> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    const additionalParams = this.invocationParams(options);
    // `someAPIRequest` is a placeholder for a call to your model's API.
    const content = await someAPIRequest(messages, additionalParams);
    return {
      generations: [{ message: new AIMessage({ content }), text: content }],
      llmOutput: {},
    };
  }

  _llmType(): string {
    return "custom_chat_model";
  }
}
https://js.langchain.com/v0.1/docs/modules/model_io/chat/token_usage_tracking/
Tracking token usage
====================
This notebook goes over how to track your token usage for specific calls.
Using AIMessage.response_metadata
---------------------------------------------------------------------------------------------------------------------------
A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [AIMessage.response\_metadata](/v0.1/docs/modules/model_io/chat/response_metadata/) field. Here's an example with OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
});
const res = await chatModel.invoke("Tell me a joke.");
console.log(res.response_metadata);

/*
  {
    tokenUsage: { completionTokens: 15, promptTokens: 12, totalTokens: 27 },
    finish_reason: 'stop'
  }
*/
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
And here's an example with Anthropic:
* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});
const res = await chatModel.invoke("Tell me a joke.");
console.log(res.response_metadata);

/*
  {
    id: 'msg_017Mgz6HdgNbi3cwL1LNB9Dw',
    model: 'claude-3-sonnet-20240229',
    stop_sequence: null,
    usage: { input_tokens: 12, output_tokens: 30 },
    stop_reason: 'end_turn'
  }
*/
#### API Reference:
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Using callbacks
---------------------------------------------------------------------
You can also use the `handleLLMEnd` callback to get the full output from the LLM, including token usage for supported models. Here's an example of how you could do that:
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await chatModel.invoke("Tell me a joke.");

/*
  {
    "generations": [
      [
        {
          "text": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!",
          "message": {
            "lc": 1,
            "type": "constructor",
            "id": ["langchain_core", "messages", "AIMessage"],
            "kwargs": {
              "content": "Why did the scarecrow win an award?\n\nBecause he was outstanding in his field!",
              "tool_calls": [],
              "invalid_tool_calls": [],
              "additional_kwargs": {},
              "response_metadata": {
                "tokenUsage": {
                  "completionTokens": 17,
                  "promptTokens": 12,
                  "totalTokens": 29
                },
                "finish_reason": "stop"
              }
            }
          },
          "generationInfo": {
            "finish_reason": "stop"
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 17,
        "promptTokens": 12,
        "totalTokens": 29
      }
    }
  }
*/
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
Cancelling requests
===================
You can cancel a request by passing a `signal` option when you call the model. For example, for OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ temperature: 1 });
const controller = new AbortController();

// Call `controller.abort()` somewhere to cancel the request.
const res = await model.invoke(
  [
    new HumanMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ],
  { signal: controller.signal }
);

console.log(res);
/*
  '\n\nSocktastic Colors'
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Note that this only cancels the outgoing request if the underlying provider exposes that option. If possible, LangChain will cancel the underlying request; otherwise, it will cancel the processing of the response.
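For instance, here's a minimal sketch that cancels the request if it takes longer than five seconds (this uses `AbortSignal.timeout()`, which assumes a runtime that supports it, such as recent Node.js versions):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ temperature: 1 });

try {
  // `AbortSignal.timeout()` returns a signal that aborts automatically,
  // so there is no need to wire up an AbortController by hand.
  const res = await model.invoke(
    [new HumanMessage("Write me a very long story about socks.")],
    { signal: AbortSignal.timeout(5000) } // cancel after 5s
  );
  console.log(res);
} catch (e) {
  // A cancelled request surfaces as an error you can catch and handle.
  console.error("Request was cancelled or failed:", e);
}
```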
Dealing with API Errors
=======================
If the model provider returns an error from their API, by default LangChain will retry up to 6 times with exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, you can pass a `maxRetries` option when you instantiate the model. For example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ maxRetries: 10 });
```
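Conversely, if you would rather surface failures immediately and handle them yourself, you can turn retries off entirely. A minimal sketch (the error handling here is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// With `maxRetries: 0`, API errors propagate on the first failure.
const model = new ChatOpenAI({ maxRetries: 0 });

try {
  const res = await model.invoke("Tell me a joke.");
  console.log(res.content);
} catch (e) {
  // Without retries, even a transient provider error lands here right away.
  console.error("API call failed:", e);
}
```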
Dealing with rate limits
========================
Some providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a `maxConcurrency` option when instantiating a Chat Model. This option allows you to specify the maximum number of concurrent requests you want to make to the provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.
For example, if you set `maxConcurrency: 5`, then LangChain will only send 5 requests to the provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.
To use this feature, simply pass `maxConcurrency: <number>` when you instantiate the model. For example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ maxConcurrency: 5 });
```
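To see the queueing in action, here's a minimal sketch that starts ten requests at once; with `maxConcurrency: 5`, only five are ever in flight at the same time (the prompts are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ maxConcurrency: 5 });

// Ten calls started together: five go out immediately, the other five
// wait in LangChain's internal queue until a slot frees up.
const prompts = Array.from({ length: 10 }, (_, i) => `Tell me joke #${i + 1}.`);
const results = await Promise.all(prompts.map((p) => model.invoke(p)));

console.log(results.length); // 10
```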
Response metadata
=================
Many model providers include some metadata in their chat generation responses. This metadata can be accessed via the `AIMessage.response_metadata` attribute. Depending on the model provider and model configuration, this can contain information like [token counts](/v0.1/docs/modules/model_io/chat/token_usage_tracking/) and more.
Hereβs what the response metadata looks like for a few different providers:
OpenAI[β](#openai "Direct link to OpenAI")
------------------------------------------
```typescript
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({ model: "gpt-4-turbo" });
const message = await chatModel.invoke([
  ["human", "What's the oldest known example of cuneiform"],
]);

console.log(message.response_metadata);
```

```
{
  tokenUsage: { completionTokens: 164, promptTokens: 17, totalTokens: 181 },
  finish_reason: "stop"
}
```
Anthropic[β](#anthropic "Direct link to Anthropic")
---------------------------------------------------
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chatModel = new ChatAnthropic({ model: "claude-3-sonnet-20240229" });
const message = await chatModel.invoke([
  ["human", "What's the oldest known example of cuneiform"],
]);

console.log(message.response_metadata);
```

```
{
  id: "msg_01K8kC9wskG6qsSGRmY7b3kj",
  model: "claude-3-sonnet-20240229",
  stop_sequence: null,
  usage: { input_tokens: 17, output_tokens: 355 },
  stop_reason: "end_turn"
}
```
Google VertexAI[β](#google-vertexai "Direct link to Google VertexAI")
---------------------------------------------------------------------
```typescript
import { ChatVertexAI } from "@langchain/google-vertexai-web";

const chatModel = new ChatVertexAI({ model: "gemini-pro" });
const message = await chatModel.invoke([
  ["human", "What's the oldest known example of cuneiform"],
]);

console.log(message.response_metadata);
```

```
{
  usage_metadata: {
    prompt_token_count: undefined,
    candidates_token_count: undefined,
    total_token_count: undefined
  },
  safety_ratings: [
    {
      category: "HARM_CATEGORY_HATE_SPEECH",
      probability: "NEGLIGIBLE",
      probability_score: 0.027480692,
      severity: "HARM_SEVERITY_NEGLIGIBLE",
      severity_score: 0.073430054
    },
    {
      category: "HARM_CATEGORY_DANGEROUS_CONTENT",
      probability: "NEGLIGIBLE",
      probability_score: 0.055412795,
      severity: "HARM_SEVERITY_NEGLIGIBLE",
      severity_score: 0.112405084
    },
    {
      category: "HARM_CATEGORY_HARASSMENT",
      probability: "NEGLIGIBLE",
      probability_score: 0.055720285,
      severity: "HARM_SEVERITY_NEGLIGIBLE",
      severity_score: 0.020844316
    },
    {
      category: "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      probability: "NEGLIGIBLE",
      probability_score: 0.05223086,
      severity: "HARM_SEVERITY_NEGLIGIBLE",
      severity_score: 0.14891148
    }
  ],
  finish_reason: undefined
}
```
MistralAI[β](#mistralai "Direct link to MistralAI")
---------------------------------------------------
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const chatModel = new ChatMistralAI({ model: "mistral-tiny" });
const message = await chatModel.invoke([
  ["human", "What's the oldest known example of cuneiform"],
]);

console.log(message.response_metadata);
```

```
{
  tokenUsage: { completionTokens: 166, promptTokens: 19, totalTokens: 185 },
  finish_reason: "stop"
}
```
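Because these shapes are provider-specific, code that needs to run against multiple providers should read the fields defensively. A minimal sketch, continuing from any of the snippets above (the fallback chain is ours, based on the shapes shown):

```typescript
// `response_metadata` has no standardized schema, so probe the known shapes.
const meta = message.response_metadata;
const usage =
  meta.tokenUsage ?? // OpenAI, MistralAI
  meta.usage ?? // Anthropic
  meta.usage_metadata; // Google VertexAI
console.log(usage);
```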
Subscribing to events
=====================
Especially when using an agent, there can be a lot of back-and-forth behind the scenes as a Chat Model processes a prompt. For agents, the response object contains an `intermediateSteps` field that you can print to see an overview of the steps the agent took to get there. If that's not enough and you want to see every exchange with the Chat Model, you can pass callbacks to the Chat Model for custom logging (or anything else you want to do) as the model goes through the steps:
For more info on the events available see the [Callbacks](/v0.1/docs/modules/callbacks/) section of the docs.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
```typescript
import { type LLMResult } from "langchain/schema";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { Serialized } from "@langchain/core/load/serializable";

// We can pass in a list of CallbackHandlers to the LLM constructor
// to get callbacks for various events.
const model = new ChatOpenAI({
  callbacks: [
    {
      handleLLMStart: async (llm: Serialized, prompts: string[]) => {
        console.log(JSON.stringify(llm, null, 2));
        console.log(JSON.stringify(prompts, null, 2));
      },
      handleLLMEnd: async (output: LLMResult) => {
        console.log(JSON.stringify(output, null, 2));
      },
      handleLLMError: async (err: Error) => {
        console.error(err);
      },
    },
  ],
});

await model.invoke([
  new HumanMessage(
    "What is a good name for a company that makes colorful socks?"
  ),
]);

/*
{
  "name": "openai"
}
[
  "Human: What is a good name for a company that makes colorful socks?"
]
{
  "generations": [
    [
      {
        "text": "Rainbow Soles",
        "message": {
          "text": "Rainbow Soles"
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {
      "completionTokens": 4,
      "promptTokens": 21,
      "totalTokens": 25
    }
  }
}
*/
```
#### API Reference:
* LLMResult from `langchain/schema`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
* [Serialized](https://api.js.langchain.com/types/langchain_core_load_serializable.Serialized.html) from `@langchain/core/load/serializable`
Adding a timeout
================
By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a `timeout` option, in milliseconds, when you call the model. For example, for OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({ temperature: 1 });

const response = await chat.invoke(
  [
    new HumanMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ],
  { timeout: 1000 } // 1s timeout
);

console.log(response);
// AIMessage { text: '\n\nRainbow Sox Co.' }
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
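The `timeout` value is an ordinary per-call option, so it can be passed to other invocation methods as well. For example, a minimal sketch with `.stream()` (the ten-second value is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({});

// The same per-call `timeout` option applies when streaming.
const stream = await chat.stream("Tell me a joke.", { timeout: 10000 });

for await (const chunk of stream) {
  console.log(chunk.content);
}
```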
Adding a timeout (LLMs)
=======================
The same `timeout` call option works for text-completion LLMs. For example, for OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ temperature: 1 });

const resA = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?",
  { timeout: 1000 } // 1s timeout
);

console.log({ resA });
// { resA: '\n\nSocktastic Colors' }
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
Quick Start
===========
Large Language Models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.
There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.); the `LLM` class is designed to provide a standard interface for all of them.
In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### OpenAI

First we'll need to install the LangChain OpenAI integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:
```bash
export OPENAI_API_KEY="..."
```
If you'd prefer not to set an environment variable, you can pass the key in directly via the `apiKey` named parameter when initializing the OpenAI LLM class:

```typescript
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
  apiKey: "YOUR_KEY_HERE",
});
```
Otherwise, you can initialize with an empty object:

```typescript
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({});
```
#### Local (using Ollama)

[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2 and Mistral, locally.
First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
* [Download](https://ollama.ai/download)
* Fetch a model via e.g. `ollama pull mistral`
Then, make sure the Ollama server is running. Next, you'll need to install the LangChain community package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
And then you can do:
```typescript
import { Ollama } from "@langchain/community/llms/ollama";

const llm = new Ollama({
  baseUrl: "http://localhost:11434", // Default value
  model: "mistral",
});
```
LCEL[β](#lcel "Direct link to LCEL")
------------------------------------
LLMs implement the [Runnable interface](/v0.1/docs/expression_language/interface/), the basic building block of the [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/). This means they support `invoke`, `stream`, `batch`, and `streamLog` calls.
LLMs accept **strings** as inputs, or objects which can be coerced to string prompts, including `BaseMessage[]` and `PromptValue`.
```typescript
await llm.invoke(
  "What are some theories about the relationship between unemployment and inflation?"
);
```

```
'\n\n1. The Phillips Curve Theory: This suggests that there is an inverse relationship between unemployment and inflation, meaning that when unemployment is low, inflation will be higher, and when unemployment is high, inflation will be lower.\n\n2. The Monetarist Theory: This theory suggests that the relationship between unemployment and inflation is weak, and that changes in the money supply are more important in determining inflation.\n\n3. The Resource Utilization Theory: This suggests that when unemployment is low, firms are able to raise wages and prices in order to take advantage of the increased demand for their products and services. This leads to higher inflation.'
```
See the [Runnable interface](/v0.1/docs/expression_language/interface/) for more details on the available methods.
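For instance, here's a minimal sketch of the `stream` and `batch` methods mentioned above (the prompts are illustrative):

```typescript
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({});

// Stream tokens as they are generated instead of waiting for the full text.
const stream = await llm.stream("Tell me a joke.");
for await (const chunk of stream) {
  process.stdout.write(chunk);
}

// Send several prompts at once; results come back in the same order.
const results = await llm.batch(["Tell me a joke.", "Tell me a poem."]);
console.log(results.length); // 2
```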
[Legacy] `generate`: batch calls, richer outputs
------------------------------------------------
`generate` lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:
```typescript
const llmResult = await llm.generate(["Tell me a joke", "Tell me a poem"]);

console.log(llmResult.generations.length);
// 2

console.log(llmResult.generations[0]);
/*
  [
    {
      text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam!",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/

console.log(llmResult.generations[1]);
/*
  [
    {
      text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/
```
You can also access provider-specific information that is returned. This information is NOT standardized across providers.
```typescript
console.log(llmResult.llmOutput);

/*
  {
    tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 }
  }
*/
```
Here's an example with additional parameters, which sets `-1` for `max_tokens` to turn on token size calculations:
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  // customize the openai model that's used, `gpt-3.5-turbo-instruct` is the default
  model: "gpt-3.5-turbo-instruct",
  // `maxTokens` supports a magic -1 param where the max token length for the
  // specified model is calculated and included in the request to OpenAI as
  // the `max_tokens` param
  maxTokens: -1,
  // use `modelKwargs` to pass params directly to the openai call
  // note that OpenAI uses snake_case instead of camelCase
  modelKwargs: {
    user: "me",
  },
  // for additional logging for debugging purposes
  verbose: true,
});

const resA = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log({ resA });
// { resA: '\n\nSocktastic Colors' }
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
Streaming
=========
Some LLMs provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's generated, or process it on the fly.
Using `.stream()`
-----------------
The easiest way to stream is to use the `.stream()` method. This returns a readable stream that you can also iterate over:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  maxTokens: 25,
});

const stream = await model.stream("Tell me a joke.");

for await (const chunk of stream) {
  console.log(chunk);
}

/*
Q: What did the fish say when it hit the wall?
A: Dam!
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
For models that do not support streaming, the entire response will be returned as a single chunk.
Using a callback handler
------------------------
You can also use a [`CallbackHandler`](https://github.com/langchain-ai/langchainjs/blob/main/langchain/src/callbacks/base.ts) like so:
```typescript
import { OpenAI } from "@langchain/openai";

// To enable streaming, we pass in `streaming: true` to the LLM constructor.
// Additionally, we pass in a handler for the `handleLLMNewToken` event.
const model = new OpenAI({
  maxTokens: 25,
  streaming: true,
});

const response = await model.invoke("Tell me a joke.", {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ],
});
console.log(response);
/*
{ token: '\n' }
{ token: '\n' }
{ token: 'Q' }
{ token: ':' }
{ token: ' Why' }
{ token: ' did' }
{ token: ' the' }
{ token: ' chicken' }
{ token: ' cross' }
{ token: ' the' }
{ token: ' playground' }
{ token: '?' }
{ token: '\n' }
{ token: 'A' }
{ token: ':' }
{ token: ' To' }
{ token: ' get' }
{ token: ' to' }
{ token: ' the' }
{ token: ' other' }
{ token: ' slide' }
{ token: '.' }

Q: Why did the chicken cross the playground?
A: To get to the other slide.
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
We still have access to the final `LLMResult` if we use `generate`. However, `token_usage` is not currently supported for streaming.
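For illustration, here's a minimal sketch of that pattern (this sketch assumes an `OPENAI_API_KEY` in the environment; the exact completion will vary):

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxTokens: 25, streaming: true });

// `generate` still resolves to a full `LLMResult`, even with streaming enabled.
const result = await model.generate(["Tell me a joke."]);

console.log(result.generations[0][0].text);
// `result.llmOutput?.tokenUsage` will be empty here, since token usage
// is not reported for streamed responses.
```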
https://js.langchain.com/v0.1/docs/modules/model_io/llms/llm_caching/
Caching
=======
LangChain provides an optional caching layer for LLMs. This is useful for two reasons:
* It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
* It can speed up your application by reducing the number of API calls you make to the LLM provider.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  cache: true,
});
```
In Memory Cache
---------------
The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
```typescript
console.time();

// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a long joke");
console.log(res);

console.timeEnd();

/*
  A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it.

  The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar."

  Intrigued, the man asks what the tasks are.

  The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one."

  The man thinks for a moment and then confidently says, "I'll do it."

  He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand.

  The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight.

  The bartender hands the man the jar of money and asks, "How

  default: 4.187s
*/
```
```typescript
console.time();

// The second time, the identical prompt is already in cache, so it goes much faster
const res2 = await model.invoke("Tell me a long joke");
console.log(res2);

console.timeEnd();

/*
  A man walks into a bar and sees a jar filled with money on the counter. Curious, he asks the bartender about it.

  The bartender explains, "We have a challenge for our customers. If you can complete three tasks, you win all the money in the jar."

  Intrigued, the man asks what the tasks are.

  The bartender replies, "First, you have to drink a whole bottle of tequila without making a face. Second, there's a pitbull out back with a sore tooth. You have to pull it out. And third, there's an old lady upstairs who has never had an orgasm. You have to give her one."

  The man thinks for a moment and then confidently says, "I'll do it."

  He grabs the bottle of tequila and downs it in one gulp, without flinching. He then heads to the back and after a few minutes of struggling, emerges with the pitbull's tooth in hand.

  The bar erupts in cheers and the bartender leads the man upstairs to the old lady's room. After a few minutes, the man walks out with a big smile on his face and the old lady is giggling with delight.

  The bartender hands the man the jar of money and asks, "How

  default: 175.74ms
*/
```
Caching with Momento
--------------------
LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. Given Momento's compatibility with Node.js, browser, and edge environments, ensure you install the relevant package.
To install for **Node.js**:
```bash
npm install @gomomento/sdk
# or
yarn add @gomomento/sdk
# or
pnpm add @gomomento/sdk
```
To install for **browser/edge workers**:
```bash
npm install @gomomento/sdk-web
# or
yarn add @gomomento/sdk-web
# or
pnpm add @gomomento/sdk-web
```
Next you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM like this:
```typescript
import { OpenAI } from "@langchain/openai";
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";
import { MomentoCache } from "@langchain/community/caches/momento";

// See https://github.com/momentohq/client-sdk-javascript for connection options
const client = new CacheClient({
  configuration: Configurations.Laptop.v1(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_API_KEY",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});
const cache = await MomentoCache.fromProps({
  client,
  cacheName: "langchain",
});

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [MomentoCache](https://api.js.langchain.com/classes/langchain_community_caches_momento.MomentoCache.html) from `@langchain/community/caches/momento`
Caching with Redis
------------------
LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `ioredis` package:
```bash
npm install ioredis
# or
yarn add ioredis
# or
pnpm add ioredis
```
Then, you can pass a `cache` option when you instantiate the LLM. For example:
```typescript
import { OpenAI } from "@langchain/openai";
import { RedisCache } from "@langchain/community/caches/ioredis";
import { Redis } from "ioredis";

// See https://github.com/redis/ioredis for connection options
const client = new Redis({});

const cache = new RedisCache(client);

const model = new OpenAI({ cache });
```
Caching with Upstash Redis
--------------------------
LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the `@upstash/redis` package:
```bash
npm install @upstash/redis
# or
yarn add @upstash/redis
# or
pnpm add @upstash/redis
```
You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token.
Then, you can pass a `cache` option when you instantiate the LLM. For example:
```typescript
import { OpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection options
const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
});

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:
```typescript
import { Redis } from "@upstash/redis";
import https from "https";

import { OpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";

// const client = new Redis({
//   url: process.env.UPSTASH_REDIS_REST_URL!,
//   token: process.env.UPSTASH_REDIS_REST_TOKEN!,
//   agent: new https.Agent({ keepAlive: true }),
// });

// Or simply call Redis.fromEnv() to automatically load the UPSTASH_REDIS_REST_URL
// and UPSTASH_REDIS_REST_TOKEN environment variables.
const client = Redis.fromEnv({
  agent: new https.Agent({ keepAlive: true }),
});

const cache = new UpstashRedisCache({ client });

const model = new OpenAI({ cache });
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [UpstashRedisCache](https://api.js.langchain.com/classes/langchain_community_caches_upstash_redis.UpstashRedisCache.html) from `@langchain/community/caches/upstash_redis`
Caching with Cloudflare KV
--------------------------
info
This integration is only supported in Cloudflare Workers.
If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache.
For information on how to set up KV in Cloudflare, see [the official documentation](https://developers.cloudflare.com/kv/).
**Note:** If you are using TypeScript, you may need to install types if they aren't already present:
```bash
npm install -S @cloudflare/workers-types
# or
yarn add @cloudflare/workers-types
# or
pnpm add @cloudflare/workers-types
```
```typescript
import type { KVNamespace } from "@cloudflare/workers-types";

import { OpenAI } from "@langchain/openai";
import { CloudflareKVCache } from "@langchain/cloudflare";

export interface Env {
  KV_NAMESPACE: KVNamespace;
  OPENAI_API_KEY: string;
}

export default {
  async fetch(_request: Request, env: Env) {
    try {
      const cache = new CloudflareKVCache(env.KV_NAMESPACE);
      const model = new OpenAI({
        cache,
        model: "gpt-3.5-turbo-instruct",
        apiKey: env.OPENAI_API_KEY,
      });
      const response = await model.invoke("How are you today?");
      return new Response(JSON.stringify(response), {
        headers: { "content-type": "application/json" },
      });
    } catch (err: any) {
      console.log(err.message);
      return new Response(err.message, { status: 500 });
    }
  },
};
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [CloudflareKVCache](https://api.js.langchain.com/classes/langchain_cloudflare.CloudflareKVCache.html) from `@langchain/cloudflare`
Caching on the File System
--------------------------
danger
This cache is not recommended for production use. It is only intended for local development.
LangChain provides a simple file system cache. By default the cache is stored in a temporary directory, but you can specify a custom directory if you want.
```typescript
import { LocalFileCache } from "langchain/cache/file_system";

const cache = await LocalFileCache.create();
```
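Here's a slightly fuller sketch of wiring the file system cache into a model. The directory argument reflects the "custom directory" mentioned above, but the exact parameter is an assumption — omit it to fall back to a temporary directory:

```typescript
import { OpenAI } from "@langchain/openai";
import { LocalFileCache } from "langchain/cache/file_system";

// Assumption: `create` accepts an optional directory path for the cache files.
const cache = await LocalFileCache.create("./.langchain-cache");

const model = new OpenAI({ model: "gpt-3.5-turbo-instruct", cache });

// Repeated identical prompts are now served from disk, even across restarts.
const res = await model.invoke("Tell me a joke.");
console.log(res);
```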
https://js.langchain.com/v0.1/docs/modules/model_io/llms/custom_llm/
Custom LLM
==========
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is directly supported in LangChain.
There are a few required things that a custom LLM needs to implement after extending the [`LLM` class](https://api.js.langchain.com/classes/langchain_core_language_models_llms.LLM.html):
* A `_call` method that takes in a string and call options (which includes things like `stop` sequences), and returns a string.
* A `_llmType` method that returns a string. Used for logging purposes only.
You can also implement the following optional method:
* A `_streamResponseChunks` method that returns an `AsyncIterator` and yields [`GenerationChunks`](https://api.js.langchain.com/classes/langchain_core_outputs.GenerationChunk.html). This allows the LLM to support streaming outputs.
Let's implement a very simple custom LLM that just echoes back the first `n` characters of the input.
```typescript
import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import type { CallbackManagerForLLMRun } from "langchain/callbacks";
import { GenerationChunk } from "langchain/schema";

export interface CustomLLMInput extends BaseLLMParams {
  n: number;
}

export class CustomLLM extends LLM {
  n: number;

  constructor(fields: CustomLLMInput) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "custom";
  }

  async _call(
    prompt: string,
    options: this["ParsedCallOptions"],
    runManager: CallbackManagerForLLMRun
  ): Promise<string> {
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    return prompt.slice(0, this.n);
  }

  async *_streamResponseChunks(
    prompt: string,
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): AsyncGenerator<GenerationChunk> {
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    for (const letter of prompt.slice(0, this.n)) {
      yield new GenerationChunk({
        text: letter,
      });
      // Trigger the appropriate callback
      await runManager?.handleLLMNewToken(letter);
    }
  }
}
```
We can now use this as any other LLM:
```typescript
const llm = new CustomLLM({ n: 4 });

await llm.invoke("I am an LLM");
```

```
I am
```
And it supports streaming:

```typescript
const stream = await llm.stream("I am an LLM");

for await (const chunk of stream) {
  console.log(chunk);
}
```

```
I
 
a
m
```
Richer outputs
--------------
If you want to take advantage of LangChain's callback system for functionality like token tracking, you can extend the [`BaseLLM`](https://api.js.langchain.com/classes/langchain_core_language_models_llms.BaseLLM.html) class and implement the lower level `_generate` method. Rather than taking a single string as input and returning a single string, it can take multiple input strings and map each to multiple string outputs. Additionally, it returns a `Generation` output with fields for additional metadata rather than just a string.
```typescript
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { LLMResult } from "@langchain/core/outputs";
import {
  BaseLLM,
  BaseLLMCallOptions,
  BaseLLMParams,
} from "@langchain/core/language_models/llms";

export interface AdvancedCustomLLMCallOptions extends BaseLLMCallOptions {}

export interface AdvancedCustomLLMParams extends BaseLLMParams {
  n: number;
}

export class AdvancedCustomLLM extends BaseLLM<AdvancedCustomLLMCallOptions> {
  n: number;

  constructor(fields: AdvancedCustomLLMParams) {
    super(fields);
    this.n = fields.n;
  }

  _llmType() {
    return "advanced_custom_llm";
  }

  async _generate(
    inputs: string[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<LLMResult> {
    const outputs = inputs.map((input) => input.slice(0, this.n));
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());

    // One input could generate multiple outputs.
    const generations = outputs.map((output) => [
      {
        text: output,
        // Optional additional metadata for the generation
        generationInfo: { outputCount: 1 },
      },
    ]);
    const tokenUsage = {
      usedTokens: this.n,
    };
    return {
      generations,
      llmOutput: { tokenUsage },
    };
  }
}
```
This will pass the additional returned information in callback events and in the `streamEvents` method:
```typescript
const llm = new AdvancedCustomLLM({ n: 4 });

const eventStream = await llm.streamEvents("I am an LLM", {
  version: "v1",
});

for await (const event of eventStream) {
  if (event.event === "on_llm_end") {
    console.log(JSON.stringify(event, null, 2));
  }
}
```

```json
{
  "event": "on_llm_end",
  "name": "AdvancedCustomLLM",
  "run_id": "a883a705-c651-4236-8095-cb515e2d4885",
  "tags": [],
  "metadata": {},
  "data": {
    "output": {
      "generations": [
        [
          {
            "text": "I am",
            "generationInfo": {
              "outputCount": 1
            }
          }
        ]
      ],
      "llmOutput": {
        "tokenUsage": {
          "usedTokens": 4
        }
      }
    }
  }
}
```
https://js.langchain.com/v0.1/docs/modules/model_io/llms/token_usage_tracking/
Tracking token usage
====================
This notebook goes over how to track your token usage for specific calls. This is currently only implemented for the OpenAI API.
Here's an example of tracking token usage for a single LLM call:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
});

const res = await chatModel.invoke("Tell me a joke.");

console.log(res.response_metadata);

/*
  {
    tokenUsage: { completionTokens: 15, promptTokens: 12, totalTokens: 27 },
    finish_reason: 'stop'
  }
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
If this model is passed to a chain or agent that calls it multiple times, it will log an output each time.
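If you want a running total across multiple calls, one option is to accumulate usage in a callback handler. A minimal sketch (the `totalTokens` counter is ours, not part of the library):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { LLMResult } from "@langchain/core/outputs";

let totalTokens = 0;

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
  callbacks: [
    {
      handleLLMEnd(output: LLMResult) {
        // The OpenAI integration reports usage on `llmOutput.tokenUsage`.
        totalTokens += output.llmOutput?.tokenUsage?.totalTokens ?? 0;
      },
    },
  ],
});

await chatModel.invoke("Tell me a joke.");
await chatModel.invoke("Tell me another joke.");

console.log({ totalTokens });
```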
https://js.langchain.com/v0.1/docs/modules/model_io/llms/dealing_with_api_errors/
Dealing with API Errors
=======================
If the model provider returns an error from their API, by default LangChain will retry up to 6 times on an exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, you can pass a `maxRetries` option when you instantiate the model. For example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxRetries: 10 });
```
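If you also want visibility into each failed attempt before a retry fires, the underlying async caller accepts an `onFailedAttempt` hook. A sketch, assuming that option is available in your version:

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  maxRetries: 10,
  // Assumption: `onFailedAttempt` is invoked once per failed attempt,
  // before the exponential-backoff retry kicks in.
  onFailedAttempt: (error: Error) => {
    console.warn(`Attempt failed, retrying if attempts remain: ${error.message}`);
  },
});
```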
https://js.langchain.com/v0.1/docs/modules/model_io/llms/cancelling_requests/
Cancelling requests
===================
You can cancel a request by passing a `signal` option when you call the model. For example, for OpenAI:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ temperature: 1 });

const controller = new AbortController();

// Call `controller.abort()` somewhere to cancel the request.

const res = await model.invoke(
  "What would be a good name for a company that makes colorful socks?",
  { signal: controller.signal }
);

console.log(res);
/*
'\n\nSocktastic Colors'
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
Note that this will only cancel the outgoing request if the underlying provider exposes that option. Where possible, LangChain cancels the underlying request; otherwise it cancels processing of the response.
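A common pattern is to derive the signal from a timer. A minimal sketch using the standard `AbortSignal.timeout()` helper (available in modern Node.js and browsers); the prompt and timeout are illustrative:

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ temperature: 1 });

try {
  // Abort automatically if the call takes longer than 5 seconds.
  const res = await model.invoke("Write a very long story about socks.", {
    signal: AbortSignal.timeout(5000),
  });
  console.log(res);
} catch (e) {
  // An aborted request surfaces as an error you can catch and handle.
  console.error("Request was cancelled or failed:", e);
}
```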
https://js.langchain.com/v0.1/docs/modules/model_io/llms/dealing_with_rate_limits/
Dealing with Rate Limits
========================
Some LLM providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a `maxConcurrency` option when instantiating an LLM. This option allows you to specify the maximum number of concurrent requests you want to make to the LLM provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.
For example, if you set `maxConcurrency: 5`, then LangChain will only send 5 requests to the LLM provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.
To use this feature, simply pass `maxConcurrency: <number>` when you instantiate the LLM. For example:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxConcurrency: 5 });
```
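To see the queueing behavior described above in action, you can fire more requests than the concurrency limit at once. A minimal sketch (prompts and counts are illustrative):

```typescript
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({ maxConcurrency: 5 });

// Fire 10 requests at once: only 5 are sent concurrently; the other 5 are
// queued internally and dispatched as earlier requests complete.
const prompts = Array.from({ length: 10 }, (_, i) => `Tell me joke #${i + 1}.`);

const results = await Promise.all(prompts.map((p) => model.invoke(p)));
console.log(results.length); // 10
```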
https://js.langchain.com/v0.1/docs/modules/model_io/llms/subscribing_events/
Subscribing to events
=====================
Especially when using an agent, there can be a lot of back-and-forth going on behind the scenes as an LLM processes a prompt. For agents, the response object contains an `intermediateSteps` property that you can print to see an overview of the steps the agent took to get there. If that's not enough and you want to see every exchange with the LLM, you can pass callbacks to the LLM for custom logging (or anything else you want to do) as the model goes through the steps:
For more info on the events available see the [Callbacks](/v0.1/docs/modules/callbacks/) section of the docs.
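As an aside, here is a minimal, hypothetical sketch of inspecting those intermediate steps. It assumes an agent executor named `executor` created elsewhere with `returnIntermediateSteps: true` set, and an illustrative input string:

// Hypothetical: assumes an AgentExecutor built elsewhere with
// `returnIntermediateSteps: true` in its options.
const result = await executor.invoke({
  input: "What is the weather in Honolulu?",
});

// Each step records the action the agent chose (including the tool and
// its input) and the observation the tool returned.
for (const step of result.intermediateSteps) {
  console.log(step.action.tool, step.action.toolInput, step.observation);
}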
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
import { OpenAI } from "@langchain/openai";
import type { Serialized } from "@langchain/core/load/serializable";
import { LLMResult } from "@langchain/core/outputs";

// We can pass in a list of CallbackHandlers to the LLM constructor
// to get callbacks for various events.
const model = new OpenAI({
  callbacks: [
    {
      handleLLMStart: async (llm: Serialized, prompts: string[]) => {
        console.log(JSON.stringify(llm, null, 2));
        console.log(JSON.stringify(prompts, null, 2));
      },
      handleLLMEnd: async (output: LLMResult) => {
        console.log(JSON.stringify(output, null, 2));
      },
      handleLLMError: async (err: Error) => {
        console.error(err);
      },
    },
  ],
});

await model.invoke(
  "What would be a good company name a company that makes colorful socks?"
);

// {
//   "name": "openai"
// }
// [
//   "What would be a good company name a company that makes colorful socks?"
// ]
// {
//   "generations": [
//     [
//       {
//         "text": "\n\nSocktastic Splashes.",
//         "generationInfo": {
//           "finishReason": "stop",
//           "logprobs": null
//         }
//       }
//     ]
//   ],
//   "llmOutput": {
//     "tokenUsage": {
//       "completionTokens": 9,
//       "promptTokens": 14,
//       "totalTokens": 23
//     }
//   }
// }
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [Serialized](https://api.js.langchain.com/types/langchain_core_load_serializable.Serialized.html) from `@langchain/core/load/serializable`
* [LLMResult](https://api.js.langchain.com/types/langchain_core_outputs.LLMResult.html) from `@langchain/core/outputs`
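Callbacks can also surface token-level events. As a hedged sketch following the same handler pattern as above (and assuming your model supports streaming), passing `streaming: true` and implementing `handleLLMNewToken` should log each token as it is generated:

import { OpenAI } from "@langchain/openai";

const streamingModel = new OpenAI({
  streaming: true,
  callbacks: [
    {
      // Fires once per token while the model streams back its completion.
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});

await streamingModel.invoke("Tell me a short joke about socks.");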
https://js.langchain.com/v0.1/docs/use_cases/chatbots/memory_management/
Memory management
=================
A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:
* Simply stuffing previous messages into a chat model prompt.
* The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.
* More complex modifications, like synthesizing summaries for long-running conversations.
We'll go into more detail on a few techniques below!
Setup
-----
You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Let's also set up a chat model that we'll use for the below examples:
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});
Message passing
---------------
The simplest form of memory is passing chat history messages directly into a chain. Here's an example:
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("messages"),
]);

const chain = prompt.pipe(chat);

await chain.invoke({
  messages: [
    new HumanMessage(
      "Translate this sentence from English to French: I love programming."
    ),
    new AIMessage("J'adore la programmation."),
    new HumanMessage("What did you just say?"),
  ],
});
AIMessage { content: `I said "J'adore la programmation" which means "I love programming" in French.`}
We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages.
Chat history
------------
It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in message history class to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/v0.1/docs/integrations/chat_memory/) - but for this demo we will use an ephemeral demo class.
Here's an example of the API:
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage(
    "Translate this sentence from English to French: I love programming."
  )
);

await demoEphemeralChatMessageHistory.addMessage(
  new AIMessage("J'adore la programmation.")
);

await demoEphemeralChatMessageHistory.getMessages();
[
  HumanMessage { content: 'Translate this sentence from English to French: I love programming.' },
  AIMessage { content: "J'adore la programmation." }
]
We can use it directly to store conversation turns for our chain:
await demoEphemeralChatMessageHistory.clear();

const input1 =
  "Translate this sentence from English to French: I love programming.";

await demoEphemeralChatMessageHistory.addMessage(new HumanMessage(input1));

const response = await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});

await demoEphemeralChatMessageHistory.addMessage(response);

const input2 = "What did I just ask you?";

await demoEphemeralChatMessageHistory.addMessage(new HumanMessage(input2));

await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});
AIMessage { content: 'You just asked for the translation of the sentence "I love programming" from English to French.'}
Automatic history management
----------------------------
The previous examples pass messages to the chain explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also includes a wrapper for LCEL chains, called `RunnableWithMessageHistory`, that can handle this process automatically.
To show how it works, let's slightly modify the above prompt to take a final `input` variable that populates a `HumanMessage` template after the chat history. This means that we will expect a `chat_history` parameter that contains all messages BEFORE the current message, rather than all messages:
const runnableWithMessageHistoryPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

const chain2 = runnableWithMessageHistoryPrompt.pipe(chat);
We'll pass the latest input to the conversation here and let the `RunnableWithMessageHistory` class wrap our chain and do the work of appending that `input` variable to the chat history.
Next, let's declare our wrapped chain:
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const demoEphemeralChatMessageHistoryForChain = new ChatMessageHistory();

const chainWithMessageHistory = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistoryForChain,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});
This class takes a few parameters in addition to the chain that we want to wrap:
* A factory function that returns a message history for a given session id. This allows your chain to handle multiple users at once by loading different messages for different conversations (a sketch of one way to do this follows this list).
* An `inputMessagesKey` that specifies which part of the input should be tracked and stored in the chat history. In this example, we want to track the string passed in as `input`.
* A `historyMessagesKey` that specifies what the previous messages should be injected into the prompt as. Our prompt has a `MessagesPlaceholder` named `chat_history`, so we specify this property to match.
* (For chains with multiple outputs) an `outputMessagesKey`, which specifies which output to store as history. This is the inverse of `inputMessagesKey`.
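Here's a hedged sketch of that factory function in a multi-user setting. The `Map`-based store is a stand-in for whatever persistence you would actually use (a database or one of the chat memory integrations), and `chain2` is the chain defined above:

// Hypothetical in-memory session store; a production app would likely
// back this with a database or a chat memory integration instead.
const messageHistories = new Map<string, ChatMessageHistory>();

const chainWithPerSessionHistory = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (sessionId) => {
    // Create a fresh history the first time a session id is seen,
    // then reuse it on subsequent calls for the same session.
    if (!messageHistories.has(sessionId)) {
      messageHistories.set(sessionId, new ChatMessageHistory());
    }
    return messageHistories.get(sessionId)!;
  },
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});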
We can invoke this new chain as normal, with an additional `configurable` field that specifies the particular `sessionId` to pass to the factory function. This is unused for the demo, but in real-world chains, you'll want to return a chat history corresponding to the passed session:
await chainWithMessageHistory.invoke(
  {
    input: "Translate this sentence from English to French: I love programming.",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { content: `The translation of "I love programming" to French is "J'adore la programmation."`}
await chainWithMessageHistory.invoke(
  {
    input: "What did I just ask you?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { content: 'You just asked me to translate the sentence "I love programming" from English to French.'}
Modifying chat history
----------------------
Modifying stored chat messages can help your chatbot handle a variety of situations. Here are some examples:
### Trimming messages
LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is to only load and store the most recent `n` messages. Let's use an example history with some preloaded messages:
await demoEphemeralChatMessageHistory.clear();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("Hey there! I'm Nemo.")
);

await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Hello!"));

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("How are you today?")
);

await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Fine thanks!"));

await demoEphemeralChatMessageHistory.getMessages();
[
  HumanMessage { content: "Hey there! I'm Nemo." },
  AIMessage { content: 'Hello!' },
  HumanMessage { content: 'How are you today?' },
  AIMessage { content: 'Fine thanks!' }
]
Let's use this message history with the `RunnableWithMessageHistory` chain we declared above:
const chainWithMessageHistory2 = new RunnableWithMessageHistory({
  runnable: chain2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

await chainWithMessageHistory2.invoke(
  {
    input: "What's my name?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { content: 'Your name is Nemo.'}
We can see the chain remembers the preloaded name.
But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the `clear` method to remove messages from the history and re-add only the most recent ones. We aren't required to run this step first, but let's put it at the front of our chain to ensure it's always called:
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const trimMessages = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  if (storedMessages.length <= 2) {
    return false;
  }
  await demoEphemeralChatMessageHistory.clear();
  for (const message of storedMessages.slice(-2)) {
    await demoEphemeralChatMessageHistory.addMessage(message);
  }
  return true;
};

const chainWithTrimming = RunnableSequence.from([
  RunnablePassthrough.assign({ messages_trimmed: trimMessages }),
  chainWithMessageHistory2,
]);
Let's call this new chain and check the messages afterwards:
await chainWithTrimming.invoke(
  {
    input: "Where does P. Sherman live?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { content: 'P. Sherman lives at 42 Wallaby Way, Sydney.'}
await demoEphemeralChatMessageHistory.getMessages();
[
  HumanMessage { content: "What's my name?" },
  AIMessage { content: 'Your name is Nemo.' },
  HumanMessage { content: 'Where does P. Sherman live?' },
  AIMessage { content: 'P. Sherman lives at 42 Wallaby Way, Sydney.' }
]
And we can see that our history has removed the two oldest messages while still adding the most recent conversation at the end. The next time the chain is called, `trimMessages` will be called again, and only the two most recent messages will be passed to the model. In this case, this means that the model will forget the name we gave it the next time we invoke it:
await chainWithTrimming.invoke(
  {
    input: "What is my name?",
  },
  { configurable: { sessionId: "unused" } }
);
AIMessage { content: "I'm sorry, I don't have access to your personal information."}
await demoEphemeralChatMessageHistory.getMessages();
[
  HumanMessage { content: 'Where does P. Sherman live?' },
  AIMessage { content: 'P. Sherman lives at 42 Wallaby Way, Sydney.' },
  HumanMessage { content: 'What is my name?' },
  AIMessage { content: "I'm sorry, I don't have access to your personal information." }
]
### Summary memory
We can use this same pattern in other ways too. For example, we could use an additional LLM call to generate a summary of the conversation before calling our chain. Let's recreate our chat history and chatbot chain:
await demoEphemeralChatMessageHistory.clear();

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("Hey there! I'm Nemo.")
);

await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Hello!"));

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("How are you today?")
);

await demoEphemeralChatMessageHistory.addMessage(new AIMessage("Fine thanks!"));
const runnableWithSummaryMemoryPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

const summaryMemoryChain = runnableWithSummaryMemoryPrompt.pipe(chat);

const chainWithMessageHistory3 = new RunnableWithMessageHistory({
  runnable: summaryMemoryChain,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});
And now, let's create a function that will distill previous interactions into a summary. We can add this one to the front of the chain too:
const summarizeMessages = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  if (storedMessages.length === 0) {
    return false;
  }
  const summarizationPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    [
      "user",
      "Distill the above chat messages into a single summary message. Include as many specific details as you can.",
    ],
  ]);
  const summarizationChain = summarizationPrompt.pipe(chat);
  const summaryMessage = await summarizationChain.invoke({
    chat_history: storedMessages,
  });
  await demoEphemeralChatMessageHistory.clear();
  await demoEphemeralChatMessageHistory.addMessage(summaryMessage);
  return true;
};

const chainWithSummarization = RunnableSequence.from([
  RunnablePassthrough.assign({
    messages_summarized: summarizeMessages,
  }),
  chainWithMessageHistory3,
]);
Let's see if it remembers the name we gave it:
await chainWithSummarization.invoke(
  {
    input: "What did I say my name was?",
  },
  {
    configurable: { sessionId: "unused" },
  }
);
AIMessage { content: 'Your name is "Nemo." How can I assist you today, Nemo?'}
await demoEphemeralChatMessageHistory.getMessages();
[
  AIMessage { content: 'In the conversation, Nemo introduces himself and asks how the other person is doing. The other person responds that they are fine.' },
  HumanMessage { content: 'What did I say my name was?' },
  AIMessage { content: 'Your name is "Nemo." How can I assist you today, Nemo?' }
]
Note that invoking the chain again will generate another summary from the previous summary plus any new messages, and so on. You could also design a hybrid approach, where a certain number of recent messages are retained verbatim in the chat history while older ones are summarized.
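As a hedged sketch of that hybrid idea, reusing `demoEphemeralChatMessageHistory`, `chat`, and the summarization prompt from above, and picking an arbitrary cutoff of two retained messages:

const hybridCondenseMessages = async (_chainInput: Record<string, any>) => {
  const storedMessages = await demoEphemeralChatMessageHistory.getMessages();
  // Nothing to condense until the history outgrows the retention window.
  if (storedMessages.length <= 2) {
    return false;
  }
  const olderMessages = storedMessages.slice(0, -2);
  const recentMessages = storedMessages.slice(-2);
  const summarizationPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    [
      "user",
      "Distill the above chat messages into a single summary message. Include as many specific details as you can.",
    ],
  ]);
  // Summarize only the older messages, keeping the recent ones verbatim.
  const summaryMessage = await summarizationPrompt.pipe(chat).invoke({
    chat_history: olderMessages,
  });
  await demoEphemeralChatMessageHistory.clear();
  await demoEphemeralChatMessageHistory.addMessage(summaryMessage);
  for (const message of recentMessages) {
    await demoEphemeralChatMessageHistory.addMessage(message);
  }
  return true;
};

This could be dropped into a `RunnableSequence` in front of the chain exactly like `summarizeMessages` above.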
https://js.langchain.com/v0.1/docs/use_cases/chatbots/quickstart/
Quickstart
==========
Overview
--------
We'll go over an example of how to design and implement an LLM-powered chatbot. Here are a few of the high-level components we'll be working with:
* `Chat Models`. The chatbot interface is based around messages rather than raw text, and therefore is best suited to Chat Models rather than text LLMs. See [here](/v0.1/docs/integrations/chat/) for a list of chat model integrations and [here](/v0.1/docs/modules/model_io/chat/) for documentation on the chat model interface in LangChain. You can use `LLMs` (see [here](/v0.1/docs/modules/model_io/llms/)) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.
* `Prompt Templates`, which simplify the process of assembling prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.
* `Chat History`, which allows a chatbot to "remember" past interactions and take them into account when responding to followup questions. [See here](/v0.1/docs/modules/memory/chat_messages/) for more information.
* `Retrievers` (optional), which are useful if you want to build a chatbot that can use domain-specific, up-to-date knowledge as context to augment its responses. [See here](/v0.1/docs/modules/data_connection/retrievers/) for in-depth documentation on retrieval systems.
We'll cover how to fit the above components together to create a powerful conversational chatbot.
Quickstart
----------
We'll use OpenAI for this quickstart. Install the integration package and set an `OPENAI_API_KEY` environment variable:
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Let's initialize the chat model which will serve as the chatbot's brain:
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0.2,
});
If we invoke our chat model, the output is an AIMessage:
import { HumanMessage } from "@langchain/core/messages";

await chat.invoke([
  new HumanMessage(
    "Translate this sentence from English to French: I love programming."
  ),
]);
AIMessage { content: "J'adore la programmation." }
The model on its own does not have any concept of state. For example, if you ask a followup question:
await chat.invoke([new HumanMessage("What did you just say?")]);
AIMessage { content: 'I said, "What did you just say?"'}
We can see that it doesn't take the previous conversation turn into context, and cannot answer the question.
To get around this, we need to pass the entire conversation history into the model. Let's see what happens when we do that:
import { AIMessage } from "@langchain/core/messages";

await chat.invoke([
  new HumanMessage(
    "Translate this sentence from English to French: I love programming."
  ),
  new AIMessage("J'adore la programmation."),
  new HumanMessage("What did you just say?"),
]);
AIMessage { content: `I said, "J'adore la programmation," which means "I love programming" in French.`}
And now we can see that we get a good response!
This is the basic idea underpinning a chatbot's ability to interact conversationally.
Prompt templates
----------------
Let's define a prompt template to make formatting a bit easier. We can create a chain by piping it into the model:
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. Answer all questions to the best of your ability.",
  ],
  new MessagesPlaceholder("messages"),
]);

const chain = prompt.pipe(chat);
The `MessagesPlaceholder` above inserts chat messages passed into the chain's input as `messages` directly into the prompt. Then, we can invoke the chain like this:
await chain.invoke({
  messages: [
    new HumanMessage(
      "Translate this sentence from English to French: I love programming."
    ),
    new AIMessage("J'adore la programmation."),
    new HumanMessage("What did you just say?"),
  ],
});
AIMessage { content: `I said, "J'adore la programmation," which means "I love programming" in French.`}
Message history
---------------
As a shortcut for managing the chat history, we can use a [`MessageHistory`](/v0.1/docs/modules/memory/chat_messages/) class, which is responsible for saving and loading chat messages. There are many built-in message history integrations that persist messages to a variety of databases, but for this quickstart we'll use an in-memory, demo message history called `ChatMessageHistory`.
Here's an example of using it directly:
import { ChatMessageHistory } from "langchain/stores/message/in_memory";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

await demoEphemeralChatMessageHistory.addMessage(new HumanMessage("hi!"));

await demoEphemeralChatMessageHistory.addMessage(new AIMessage("whats up?"));

await demoEphemeralChatMessageHistory.getMessages();
[ HumanMessage { content: 'hi!' }, AIMessage { content: 'whats up?' }]
Once we do that, we can pass the stored messages directly into our chain as a parameter:
await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage(
    "Translate this sentence from English to French: I love programming."
  )
);

const responseMessage = await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});

console.log(responseMessage);
AIMessage { content: 'The translation of "I love programming" in French is "J\'adore la programmation'}
await demoEphemeralChatMessageHistory.addMessage(responseMessage);

await demoEphemeralChatMessageHistory.addMessage(
  new HumanMessage("What did you just say?")
);

const responseMessage2 = await chain.invoke({
  messages: await demoEphemeralChatMessageHistory.getMessages(),
});

console.log(responseMessage2);
AIMessage { content: `I said, "J'adore la programmation," which means "I love programming" in French.`}
And now we have a basic chatbot!
While this chain can serve as a useful chatbot on its own with just the model's internal knowledge, it's often useful to introduce some form of `retrieval-augmented generation`, or RAG for short, over domain-specific knowledge to make our chatbot more focused. We'll cover this next.
Retrievers
----------
We can set up and use a [`Retriever`](/v0.1/docs/modules/data_connection/retrievers/) to pull domain-specific knowledge for our chatbot. To show this, let's expand the simple chatbot we created above to be able to answer questions about LangSmith.
We'll use [the LangSmith documentation](https://docs.smith.langchain.com/user_guide) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/v0.1/docs/use_cases/question_answering/).
Let's set up our retriever. First, we'll install some required deps:
npm install cheerio
yarn add cheerio
pnpm add cheerio
Next, we'll use a document loader to pull data from a webpage:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();
Next, we split it into smaller chunks that the LLM's context window can handle:
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const allSplits = await textSplitter.splitDocuments(rawDocs);
Then we embed and store those chunks in a vector database:
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);
And finally, let's create a retriever from our initialized vectorstore:
const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);
[
  Document {
    pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' +
      'well and we want to be more rigorous about testing changes. We can use a dataset\n' +
      "that we've constructed along the way (see above). Alternatively, we could spend some\n" +
      'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: "chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by defaultβAt LangChain, all of us have LangSmith's tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish",
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  }
]
We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions.
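As an aside, and as a hedged sketch (the exact scores will depend on your embeddings), you can also query the vector store directly instead of going through the retriever interface; `similaritySearchWithScore` returns `[Document, score]` pairs:

// Query the vector store directly for the top 4 matches with scores.
const scoredResults = await vectorstore.similaritySearchWithScore(
  "how can langsmith help with testing?",
  4
);

for (const [doc, score] of scoredResults) {
  console.log(score.toFixed(3), doc.pageContent.slice(0, 80));
}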
### Handling documents
Let's modify our previous prompt to accept documents as context. We'll use a `createStuffDocumentsChain` helper function to "stuff" all of the input documents into the prompt, which also conveniently handles formatting. Other arguments (like messages) will be passed directly through into the prompt:
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm: chat,
  prompt: questionAnsweringPrompt,
});
We can invoke this `documentChain` with the raw documents we retrieved above:
const demoEphemeralChatMessageHistory2 = new ChatMessageHistory();

await demoEphemeralChatMessageHistory2.addMessage(
  new HumanMessage("how can langsmith help with testing?")
);

await documentChain.invoke({
  messages: await demoEphemeralChatMessageHistory2.getMessages(),
  context: docs,
});
LangSmith can help with testing in several ways. It allows you to quickly edit examples and add them to datasets, expanding the surface area of your evaluation sets. This can help fine-tune a model for improved quality or reduced costs. Additionally, LangSmith simplifies the construction of small datasets by hand, which can be useful for rigorous testing of changes. It also provides tracing and monitoring capabilities to visualize latency, log all traces, and troubleshoot specific issues as they arise, ensuring that your application is performing well and reliable enough for production use.
Awesome! We see an answer synthesized from information in the input documents.
### Creating a retrieval chain
Next, let's integrate our retriever into the chain. Our retriever should retrieve information relevant to the last message we pass in from the user, so we extract it and use that as input to fetch relevant docs, which we add to the current chain as `context`. We pass `context` plus the previous `messages` into our document chain to generate a final answer.
We also use the `RunnablePassthrough.assign()` method to pass intermediate steps through at each invocation. Here's what it looks like:
import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});
const response3 = await retrievalChain.invoke({
  messages: await demoEphemeralChatMessageHistory2.getMessages(),
});

console.log(response3);
{
  messages: [
    HumanMessage { content: 'how can langsmith help with testing?' }
  ],
  context: [
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + "that we've constructed along the way (see above). Alternatively, we could spend some\n" + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: [Object] },
    Document { pageContent: "chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by defaultβAt LangChain, all of us have LangSmith's tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish", metadata: [Object] },
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] }
  ],
  answer: 'LangSmith can help with testing by allowing you to quickly edit examples and add them to datasets, expanding the surface area of your evaluation sets. This can help in fine-tuning a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, which can be useful for rigorous testing of changes in your application. It also facilitates monitoring of your application, allowing you to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'
}
await demoEphemeralChatMessageHistory2.addMessage(
  new AIMessage(response3.answer)
);

await demoEphemeralChatMessageHistory2.addMessage(
  new HumanMessage("tell me more about that!")
);

await retrievalChain.invoke({
  messages: await demoEphemeralChatMessageHistory2.getMessages(),
});
{
  messages: [
    HumanMessage { content: 'how can langsmith help with testing?' },
    AIMessage { content: 'LangSmith can help with testing by allowing you to quickly edit examples and add them to datasets, expanding the surface area of your evaluation sets. This can help in fine-tuning a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, which can be useful for rigorous testing of changes in your application. It also facilitates monitoring of your application, allowing you to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.' },
    HumanMessage { content: 'tell me more about that!' }
  ],
  context: [
    Document { pageContent: 'shadowRing,', metadata: [Object] },
    Document { pageContent: 'however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata: [Object] },
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: "whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript environments through process.env1.The benefit here is that all calls to LLMs, chains, agents, tools, and retrievers are logged to LangSmith. Around 90% of the time we don't even look at the traces, but the 10% of the time that we do… it's so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow", metadata: [Object] }
  ],
  answer: 'LangSmith provides a platform for monitoring and debugging your application during testing. It allows you to log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. This can be particularly helpful in identifying and addressing unexpected end results, looping agents, slower-than-expected chains, and other issues that may arise during testing. By using LangSmith, you can gain insights into the performance and behavior of your application, enabling you to make necessary adjustments and improvements for a more reliable and high-quality application.'
}
Nice! Our chatbot can now answer domain-specific questions in a conversational way.
As an aside, if you don't want to return all the intermediate steps, you can define your retrieval chain like this, piping directly into the document chain instead of using the final `.assign()` call:
const retrievalChainWithOnlyAnswer = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).pipe(documentChain);

await retrievalChainWithOnlyAnswer.invoke({
  messages: await demoEphemeralChatMessageHistory2.getMessages(),
});
LangSmith provides the capability to monitor your application by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. This monitoring feature allows you to track the performance of your application and identify any unexpected behavior or issues. Additionally, LangSmith can be used to debug various scenarios, such as unexpected end results, looping agents, or slower-than-expected chains, providing valuable insights and assistance in optimizing the performance and reliability of your application.
Query transformation
--------------------
There's one more optimization we'll cover here. In the above example, when we asked a followup question, `tell me more about that!`, you might notice that the retrieved docs don't directly include information about testing. This is because we're passing `tell me more about that!` verbatim as a query to the retriever. The output of the retrieval chain is still okay because the document chain can generate an answer based on the chat history, but we could be retrieving richer, more informative documents:
await retriever.invoke("how can langsmith help with testing?");
[
  Document {
    pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' +
      'well and we want to be more rigorous about testing changes. We can use a dataset\n' +
      "that we've constructed along the way (see above). Alternatively, we could spend some\n" +
      'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: "chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by defaultβAt LangChain, all of us have LangSmith's tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish",
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  }
]
await retriever.invoke("tell me more about that!");
[
  Document {
    pageContent: 'shadowRing,',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  },
  Document {
    pageContent: "whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript environments through process.env1.The benefit here is that all calls to LLMs, chains, agents, tools, and retrievers are logged to LangSmith. Around 90% of the time we don't even look at the traces, but the 10% of the time that we do… it's so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow",
    metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] }
  }
]
To get around this common problem, let's add a "query transformation" step that removes references from the input. We'll wrap our old retriever as follows:
import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    // If there is only one message, pass it straight to the retriever;
    // otherwise, rephrase the conversation into a standalone query first.
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt
    .pipe(chat)
    .pipe(new StringOutputParser())
    .pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });
Above, we pass initial queries directly to the retriever as before, but we handle followup questions by rephrasing them according to a prompt. This removes references to chat history, which the retriever is unaware of.
Now let's recreate our earlier chain with this new `queryTransformingRetrieverChain`. Note that this new chain accepts an object as input and parses a string to pass to the retriever, so we don't have to do additional parsing at the top level:
const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});

const demoEphemeralChatMessageHistory3 = new ChatMessageHistory();
And finally, let's invoke it!
await demoEphemeralChatMessageHistory3.addMessage(
  new HumanMessage("how can langsmith help with testing?")
);

const response4 = await conversationalRetrievalChain.invoke({
  messages: await demoEphemeralChatMessageHistory3.getMessages(),
});

console.log(response4);
{
  messages: [
    HumanMessage { content: 'how can langsmith help with testing?' }
  ],
  context: [
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + "that we've constructed along the way (see above). Alternatively, we could spend some\n" + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: [Object] },
    Document { pageContent: "chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by defaultβAt LangChain, all of us have LangSmith's tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish", metadata: [Object] },
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] }
  ],
  answer: 'LangSmith can help with testing in several ways. It allows you to quickly edit examples and add them to datasets, which expands the surface area of your evaluation sets. This can be useful for fine-tuning a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, which can be valuable for rigorous testing of changes. It also provides tracing capabilities to monitor your application, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Overall, LangSmith helps in testing by facilitating dataset construction, monitoring, and debugging.'
}
await demoEphemeralChatMessageHistory3.addMessage(
  new AIMessage(response4.answer)
);

await demoEphemeralChatMessageHistory3.addMessage(
  new HumanMessage("tell me more about that!")
);

await conversationalRetrievalChain.invoke({
  messages: await demoEphemeralChatMessageHistory3.getMessages(),
});
{
  messages: [
    HumanMessage { content: 'how can langsmith help with testing?' },
    AIMessage { content: 'LangSmith can help with testing in several ways. It allows you to quickly edit examples and add them to datasets, which expands the surface area of your evaluation sets. This can be useful for fine-tuning a model for improved quality or reduced costs. Additionally, LangSmith simplifies the process of constructing small datasets by hand, which can be valuable for rigorous testing of changes. It also provides tracing capabilities to monitor your application, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Overall, LangSmith helps in testing by facilitating dataset construction, monitoring, and debugging.' },
    HumanMessage { content: 'tell me more about that!' }
  ],
  context: [
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.MonitoringβAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata: [Object] },
    Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + "that we've constructed along the way (see above). Alternatively, we could spend some\n" + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: [Object] },
    Document { pageContent: "datasetsβLangSmith makes it easy to curate datasets. However, these aren't just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime environments without access to the environment, check out the FAQs.β©PreviousLangSmithNextTracingOn by defaultDebuggingWhat was the exact input to the LLM?If I edit the prompt, how does", metadata: [Object] }
  ],
  answer: 'Certainly! LangSmith simplifies the process of constructing datasets by allowing you to quickly edit examples and add them to datasets. This is valuable for expanding the surface area of your evaluation sets, which can lead to improved model quality or reduced costs. Additionally, LangSmith provides tracing capabilities, allowing you to monitor your application, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. This monitoring functionality helps ensure that your application is performing well and allows for rigorous testing of changes. \n' +
    'Furthermore, LangSmith enables the curation of datasets, which can be exported for use in other contexts, such as OpenAI Evals or fine-tuning with platforms like FireworksAI. Overall, LangSmith offers a comprehensive set of tools for testing, monitoring, and dataset management.'
}
To help you understand what’s happening internally, [this LangSmith trace](https://smith.langchain.com/public/abfecedf-bfe8-4f56-87dc-2be8b12c9add/r) shows the first invocation. You can see that the user’s initial query is passed directly to the retriever, which returns suitable docs.
The invocation for the followup question, illustrated by [this LangSmith trace](https://smith.langchain.com/public/f832b529-9bbb-4108-a590-d60770152ad9/r), rephrases the user’s initial question into something more relevant to testing with LangSmith, resulting in higher-quality docs.
And we now have a chatbot capable of conversational retrieval!
Next steps
------------------------------------------------------
You now know how to build a conversational chatbot that can integrate past messages and domain-specific knowledge into its generations. There are many other optimizations you can make around this - check out the following pages for more information:
* [Memory management](/v0.1/docs/use_cases/chatbots/memory_management/): This includes a guide on automatically updating chat history, as well as trimming, summarizing, or otherwise modifying long conversations to keep your bot focused.
* [Retrieval](/v0.1/docs/use_cases/chatbots/retrieval/): A deeper dive into using different types of retrieval with your chatbot.
* [Tool usage](/v0.1/docs/use_cases/chatbots/tool_usage/): How to allow your chatbot to use tools that interact with other APIs and systems.
https://js.langchain.com/v0.1/docs/modules/model_io/prompts/pipeline/
Composition
===========
This page goes over how to compose multiple prompts together, which can be useful when you want to reuse parts of prompts. You can do this with a `PipelinePromptTemplate`, which consists of two main parts:
* Final prompt: This is the final prompt that is returned
* Pipeline prompts: This is a list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
```typescript
import {
  PromptTemplate,
  PipelinePromptTemplate,
} from "@langchain/core/prompts";

const fullPrompt = PromptTemplate.fromTemplate(`{introduction}

{example}

{start}`);

const introductionPrompt = PromptTemplate.fromTemplate(
  `You are impersonating {person}.`
);

const examplePrompt =
  PromptTemplate.fromTemplate(`Here's an example of an interaction:

Q: {example_q}
A: {example_a}`);

const startPrompt = PromptTemplate.fromTemplate(`Now, do this for real!

Q: {input}
A:`);

const composedPrompt = new PipelinePromptTemplate({
  pipelinePrompts: [
    { name: "introduction", prompt: introductionPrompt },
    { name: "example", prompt: examplePrompt },
    { name: "start", prompt: startPrompt },
  ],
  finalPrompt: fullPrompt,
});

const formattedPrompt = await composedPrompt.format({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});

console.log(formattedPrompt);

/*
  You are impersonating Elon Musk.

  Here's an example of an interaction:

  Q: What's your favorite car?
  A: Tesla

  Now, do this for real!

  Q: What's your favorite social media site?
  A:
*/
```
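Because the composed prompt is a standard prompt template, you can also pipe it directly into a model with LCEL. Here's a minimal sketch, assuming a `ChatOpenAI` model (not part of the example above) is available:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// The composed prompt behaves like any other prompt template,
// so it can be the first step of an LCEL chain.
const model = new ChatOpenAI({ temperature: 0 });
const chain = composedPrompt.pipe(model).pipe(new StringOutputParser());

const answer = await chain.invoke({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});
console.log(answer);
```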
#### API Reference:
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [PipelinePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PipelinePromptTemplate.html) from `@langchain/core/prompts`
https://js.langchain.com/v0.1/docs/use_cases/chatbots/retrieval/
Retrieval
=========
Retrieval is a common technique chatbots use to augment their responses with data outside a chat modelβs training data. This section will cover how to implement retrieval in the context of chatbots, but itβs worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/v0.1/docs/use_cases/question_answering/) that go into greater depth!
Setup
---------------------------------------
Youβll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:
* npm: `npm install @langchain/openai cheerio`
* Yarn: `yarn add @langchain/openai cheerio`
* pnpm: `pnpm add @langchain/openai cheerio`
Letβs also set up a chat model that weβll use for the below examples.
```typescript
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0.2,
});
```
Creating a retriever
------------------------------------------------------------------------------------
Weβll use [the LangSmith documentation](https://docs.smith.langchain.com) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/v0.1/docs/use_cases/question_answering/).
Letβs use a document loader to pull text from the docs:
```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();
```
Next, we split it into smaller chunks that the LLM’s context window can handle:
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const allSplits = await textSplitter.splitDocuments(rawDocs);
```
Then we embed and store those chunks in a vector database:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);
```
And finally, letβs create a retriever from our initialized vectorstore:
```typescript
const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);
```
```
[
  Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + 'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default​At LangChain, all of us have LangSmith’s tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } }
]
```
We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now weβve got a retriever that can return related data from the LangSmith docs!
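The argument passed to `asRetriever` above is the number of chunks to fetch per query (here, 4). As an illustrative sketch, you can trade prompt length for broader context by adjusting it:

```typescript
// Fetch more chunks per query at the cost of a longer prompt.
const widerRetriever = vectorstore.asRetriever(8);
await widerRetriever.invoke("how can langsmith help with testing?");
```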
Document chains
---------------------------------------------------------------------
Now that we have a retriever that can return LangSmith docs, let’s create a chain that can use them as context to answer questions. We’ll use a `createStuffDocumentsChain` helper function to "stuff" all of the input documents into the prompt. It will also handle formatting the docs as strings.
In addition to a chat model, the function also expects a prompt that has a `context` variable, as well as a placeholder for chat history messages named `messages`. Weβll create an appropriate prompt and pass it as shown below:
```typescript
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_TEMPLATE = `Answer the user's questions based on the below context. 
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
`;

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm: chat,
  prompt: questionAnsweringPrompt,
});
```
We can invoke this `documentChain` by itself to answer questions. Let’s use the docs we retrieved above together with a related question:
```typescript
import { HumanMessage, AIMessage } from "@langchain/core/messages";

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: docs,
});
```
```
Yes, LangSmith can help test LLM applications by providing tracing and monitoring capabilities. It can help in debugging bugs in formatting logic, unexpected transformations to user input, and missing user input. Additionally, it can also assist in understanding the exact output of an LLM, which can help determine if there is a need for different parsing.
```
Looks good! For comparison, we can try it with no context docs and compare the result:
```typescript
await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: [],
});
```
```
I don't know about LangSmith's specific capabilities for testing LLM applications. It's best to directly inquire with LangSmith or check their website for information on their services related to testing LLM applications.
```
We can see that without any context documents, the LLM declines to answer the question.
Retrieval chains
------------------------------------------------------------------------
Letβs combine this document chain with the retriever. Hereβs one way this can look:
```typescript
import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});
```
Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.
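For instance (purely illustrative), `parseRetrieverInput` reduces a message list to the plain string query the retriever expects:

```typescript
const retrieverQuery = parseRetrieverInput({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
// => "Can LangSmith help test my LLM applications?"
```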
Invoking this chain combines both steps outlined above:
```typescript
await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
```
```
{
  messages: [
    HumanMessage { content: 'Can LangSmith help test my LLM applications?' }
  ],
  context: [
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] },
    Document { pageContent: 'many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are', metadata: [Object] },
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata: [Object] }
  ],
  answer: 'Yes, LangSmith can help test LLM applications by providing assistance in debugging LLMs, chains, and agents. It helps in understanding the exact input to the LLM, monitoring the application for issues, and manually reviewing and annotating runs for testing and evaluation purposes.'
}
```
Looks good!
Query transformation
------------------------------------------------------------------------------------
Our retrieval chain is capable of answering questions about LangSmith, but thereβs a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.
The chain in its current form will struggle with this. Consider a followup question to our original question like `Tell me more!`. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:
await retriever.invoke("Tell me more!");
```
[
  Document { pageContent: 'shadowRing,', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'Pro,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;--joy-fontFamily-fallback:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'y-fontSize-xl7:4.5rem;--joy-fontFamily-body:"Public', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } },
  Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: { source: 'https://docs.smith.langchain.com/user_guide', loc: [Object] } }
]
```
This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can use an LLM to transform the query into a standalone query free of any external references.
Hereβs an example:
```typescript
const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformationChain = queryTransformPrompt.pipe(chat);

await queryTransformationChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
```
```
AIMessage {
  content: '"LangSmith LLM application testing and evaluation"'
}
```
Awesome! That transformed query would pull up context documents related to LLM application testing.
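As a quick sanity check (illustrative; the exact documents returned depend on your index), you can pass that standalone query to the retriever directly and compare the results to the earlier `Tell me more!` lookup:

```typescript
// Unlike the raw followup, the rephrased query surfaces docs about testing.
await retriever.invoke("LangSmith LLM application testing and evaluation");
```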
Letβs add this to our retrieval chain. We can wrap our retriever as follows:
```typescript
import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt
    .pipe(chat)
    .pipe(new StringOutputParser())
    .pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });
```
Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:
```typescript
const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});
```
Awesome! Letβs invoke this new chain with the same inputs as earlier:
```typescript
await conversationalRetrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
```
```
{
  messages: [
    HumanMessage { content: 'Can LangSmith help test my LLM applications?' }
  ],
  context: [
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] },
    Document { pageContent: 'many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are', metadata: [Object] },
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata: [Object] }
  ],
  answer: 'Yes, LangSmith can help test LLM applications by providing the ability to monitor the application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Additionally, it allows for manual review and annotation of runs, which can be useful for assessing subjective qualities that automatic evaluators struggle with.'
}
```
```typescript
await conversationalRetrievalChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
```
```
{
  messages: [
    HumanMessage { content: 'Can LangSmith help test my LLM applications?' },
    AIMessage { content: 'Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.' },
    HumanMessage { content: 'Tell me more!' }
  ],
  context: [
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata: [Object] },
    Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + 'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: [Object] },
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] }
  ],
  answer: 'LangSmith simplifies the process of manually reviewing and annotating runs through annotation queues. These queues allow you to select runs based on criteria like model type or automatic evaluation scores and queue them up for human review. As a reviewer, you can quickly step through the runs, view the input, output, and any existing tags before adding your own feedback. This can be particularly useful for assessing subjective qualities that automatic evaluators struggle with, as well as for testing changes and debugging issues related to formatting logic, unexpected transformations to user input, and missing user input. Additionally, LangSmith can help in understanding the exact output of an LLM, which may contain structured data intended to be parsed into a structured representation.'
}
```
You can check out [this LangSmith trace](https://smith.langchain.com/public/9e161ad1-8d08-49a3-bfef-0b51cfbce029/r) to see the internal query transformation step for yourself.
Streaming
---------------------------------------------------
Because this chain is constructed with LCEL, you can use familiar methods like `.stream()` with it:
```typescript
const stream = await conversationalRetrievalChain.stream({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

for await (const chunk of stream) {
  console.log(chunk);
}
```
```
{
  messages: [
    HumanMessage { content: 'Can LangSmith help test my LLM applications?' },
    AIMessage { content: 'Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.' },
    HumanMessage { content: 'Tell me more!' }
  ]
}
{
  context: [
    Document { pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata: [Object] },
    Document { pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata: [Object] },
    Document { pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' + 'well and we want to be more rigorous about testing changes. We can use a dataset\n' + 'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' + 'time constructing a small dataset by hand. For these situations, LangSmith simplifies', metadata: [Object] },
    Document { pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a', metadata: [Object] }
  ]
}
{ answer: '' }{ answer: 'Lang' }{ answer: 'Smith' }{ answer: ' simpl' }{ answer: 'ifies' }{ answer: ' the' }{ answer: ' process' }{ answer: ' of' }{ answer: ' manually' }{ answer: ' reviewing' }{ answer: ' and' }{ answer: ' annot' }{ answer: 'ating' }{ answer: ' runs' }{ answer: ' through' }{ answer: ' annotation' }{ answer: ' queues' }{ answer: '.' }
{ answer: ' These' }{ answer: ' queues' }{ answer: ' allow' }{ answer: ' you' }{ answer: ' to' }{ answer: ' select' }{ answer: ' runs' }{ answer: ' based' }{ answer: ' on' }{ answer: ' criteria' }{ answer: ' like' }{ answer: ' model' }{ answer: ' type' }{ answer: ' or' }{ answer: ' automatic' }{ answer: ' evaluation' }{ answer: ' scores' }{ answer: ' and' }{ answer: ' queue' }{ answer: ' them' }{ answer: ' up' }{ answer: ' for' }{ answer: ' human' }{ answer: ' review' }{ answer: '.' }
{ answer: ' As' }{ answer: ' a' }{ answer: ' reviewer' }{ answer: ',' }{ answer: ' you' }{ answer: ' can' }{ answer: ' quickly' }{ answer: ' step' }{ answer: ' through' }{ answer: ' the' }{ answer: ' runs' }{ answer: ',' }{ answer: ' view' }{ answer: ' the' }{ answer: ' input' }{ answer: ',' }{ answer: ' output' }{ answer: ',' }{ answer: ' and' }{ answer: ' any' }{ answer: ' existing' }{ answer: ' tags' }{ answer: ' before' }{ answer: ' adding' }{ answer: ' your' }{ answer: ' own' }{ answer: ' feedback' }{ answer: '.' }
{ answer: ' This' }{ answer: ' can' }{ answer: ' be' }{ answer: ' particularly' }{ answer: ' useful' }{ answer: ' for' }{ answer: ' assessing' }{ answer: ' subjective' }{ answer: ' qualities' }{ answer: ' that' }{ answer: ' automatic' }{ answer: ' evalu' }{ answer: 'ators' }{ answer: ' struggle' }{ answer: ' with' }{ answer: ' and' }{ answer: ' for' }{ answer: ' rigor' }{ answer: 'ously' }{ answer: ' testing' }{ answer: ' changes' }{ answer: ' in' }{ answer: ' your' }{ answer: ' application' }{ answer: '.' }
{ answer: ' Additionally' }{ answer: ',' }{ answer: ' Lang' }{ answer: 'Smith' }{ answer: ' can' }{ answer: ' help' }{ answer: ' debug' }{ answer: ' bugs' }{ answer: ' in' }{ answer: ' formatting' }{ answer: ' logic' }{ answer: ',' }{ answer: ' unexpected' }{ answer: ' transformations' }{ answer: ' to' }{ answer: ' user' }{ answer: ' input' }{ answer: ',' }{ answer: ' and' }{ answer: ' missing' }{ answer: ' user' }{ answer: ' input' }{ answer: '.' }
{ answer: ' It' }{ answer: ' can' }{ answer: ' also' }{ answer: ' assist' }{ answer: ' in' }{ answer: ' understanding' }{ answer: ' the' }{ answer: ' output' }{ answer: ' of' }{ answer: ' an' }{ answer: ' L' }{ answer: 'LM' }{ answer: ',' }{ answer: ' which' }{ answer: ' may' }{ answer: ' contain' }{ answer: ' structured' }{ answer: ' data' }{ answer: ' that' }{ answer: ' needs' }{ answer: ' to' }{ answer: ' be' }{ answer: ' parsed' }{ answer: ' into' }{ answer: ' a' }{ answer: ' structured' }{ answer: ' representation' }{ answer: '.' }{ answer: '' }
```
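Each streamed chunk only carries the keys produced at that step, so `messages` and `context` arrive once as whole values while `answer` arrives token by token. Here's a minimal sketch of aggregating just the answer into a single string:

```typescript
const answerStream = await conversationalRetrievalChain.stream({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});

let finalAnswer = "";
for await (const chunk of answerStream) {
  // Only `answer` chunks contribute to the generated text.
  if (typeof chunk.answer === "string") {
    finalAnswer += chunk.answer;
  }
}
console.log(finalAnswer);
```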
Further reading
---------------------------------------------------------------------
This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out [this section](/v0.1/docs/modules/data_connection/) of the docs.
https://js.langchain.com/v0.1/docs/use_cases/chatbots/tool_usage/
Tool usage
==========
This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.
Before reading this guide, we recommend you read [the chatbot quickstart](/v0.1/docs/use_cases/chatbots/quickstart/) in this section and familiarize yourself with [the documentation on agents](/v0.1/docs/modules/agents/).
Setup
---------------------------------------
For this guide, weβll be using [an OpenAI tools agent](/v0.1/docs/modules/agents/agent_types/openai_tools_agent/) with a single tool for searching the web. The default will be powered by [Tavily](/v0.1/docs/integrations/tools/tavily_search/), but you can switch it out for any similar tool. The rest of this section will assume youβre using Tavily.
Youβll need to [sign up for an account on the Tavily website](https://tavily.com), and install the following packages:
* npm: `npm install @langchain/openai @langchain/community`
* Yarn: `yarn add @langchain/openai @langchain/community`
* pnpm: `pnpm add @langchain/openai @langchain/community`
You will also need your OpenAI key set as `OPENAI_API_KEY` and your Tavily API key set as `TAVILY_API_KEY`.
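As an optional sketch (not required by LangChain itself), you can fail fast if either key is missing:

```typescript
// Illustrative guard: throw early instead of failing mid-request.
for (const key of ["OPENAI_API_KEY", "TAVILY_API_KEY"]) {
  if (!process.env[key]) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
}
```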
Creating an agent
---------------------------------------------------------------------------
Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.
First, letβs initialize Tavily and an OpenAI chat model capable of tool calling:
```typescript
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";

const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});
```
To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Hereβs an example:
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  new MessagesPlaceholder("messages"),
  new MessagesPlaceholder("agent_scratchpad"),
]);
```
Great! Now letβs assemble our agent:
```typescript
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";

const agent = await createOpenAIToolsAgent({
  llm: chat,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({ agent, tools });
```
Running the agent
---------------------------------------------------------------------------
Now that weβve set up our agent, letβs try interacting with it! It can handle both trivial queries that require no lookup:
```typescript
import { HumanMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [new HumanMessage("I'm Nemo!")],
});
```
```
> Entering new AgentExecutor chain...
Hi Nemo! It's great to meet you. How can I assist you today?
> Finished chain.
```
{ messages: [ HumanMessage { content: "I'm Nemo!" } ], output: "Hi Nemo! It's great to meet you. How can I assist you today?"}
Or, it can make use of the passed search tool to get up-to-date information if needed:
```typescript
await agentExecutor.invoke({
  messages: [
    new HumanMessage(
      "What is the current conservation status of the Great Barrier Reef?"
    ),
  ],
});
```
```
> Entering new AgentExecutor chain...

Invoking: `tavily_search_results_json` with `{'query': 'current conservation status of the Great Barrier Reef'}`

[{"title":"The Great Barrier Reef has avoided an 'in danger' listing, but still ...","url":"https://www.abc.net.au/news/2023-09-14/great-barrier-reef-off-in-danger-list-analysis/102854968","content":"Posted Wed 13 Sep 2023 at 10:32pm Coral reefs, including the Great Barrier Reef, are facing a bleak future. (Supplied: The Ocean Agency / XL Catlin Seaview Survey) abc.net.au/news/great-barrier-reef-off-in-danger-list-analysis/102854968 It's official.","score":0.96777,"raw_content":null}]

> Finished chain.
```
```
{
  messages: [
    HumanMessage { content: 'What is the current conservation status of the Great Barrier Reef?' }
  ],
  output: 'The Great Barrier Reef has avoided an "in danger" listing, but it is still facing a bleak future in terms of conservation. You can find more information about this on the ABC News website: [Great Barrier Reef Conservation Status](https://www.abc.net.au/news/2023-09-14/great-barrier-reef-off-in-danger-list-analysis/102854968)'
}
```
Conversational responses
------------------------------------------------------------------------------------------------
Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:
```typescript
import { AIMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [
    new HumanMessage("I'm Nemo!"),
    new AIMessage("Hello Nemo! How can I assist you today?"),
    new HumanMessage("What is my name?"),
  ],
});
```
```
> Entering new AgentExecutor chain...
Your name is Nemo!
> Finished chain.
```
```
{
  messages: [
    HumanMessage { content: "I'm Nemo!" },
    AIMessage { content: 'Hello Nemo! How can I assist you today?' },
    HumanMessage { content: 'What is my name?' }
  ],
  output: 'Your name is Nemo!'
}
```
If preferred, you can also wrap the agent executor in a `RunnableWithMessageHistory` class to internally manage history messages. First, we need to slightly modify the prompt to take a separate input variable so that the wrapper can parse which input value to store as history:
```typescript
// Adapted from https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt2 = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent2 = await createOpenAIToolsAgent({
  llm: chat,
  tools,
  prompt: prompt2,
});

const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });
```
Then, because our agent executor has multiple outputs, we also have to set the `outputMessagesKey` property when initializing the wrapper:
```typescript
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

const conversationalAgentExecutor = new RunnableWithMessageHistory({
  runnable: agentExecutor2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "input",
  outputMessagesKey: "output",
  historyMessagesKey: "chat_history",
});
```
```typescript
await conversationalAgentExecutor.invoke(
  { input: "I'm Nemo!" },
  { configurable: { sessionId: "unused" } }
);
```
```
> Entering new AgentExecutor chain...
Hello Nemo! It's great to meet you. How can I assist you today?
> Finished chain.
```
{ input: "I'm Nemo!", chat_history: [ HumanMessage { content: "I'm Nemo!" }, AIMessage { content: "Hello Nemo! It's great to meet you. How can I assist you today?" } ], output: "Hello Nemo! It's great to meet you. How can I assist you today?"}
```typescript
await conversationalAgentExecutor.invoke(
  { input: "What is my name?" },
  { configurable: { sessionId: "unused" } }
);
```
```
> Entering new AgentExecutor chain...
Your name is Nemo!
> Finished chain.
```
```
{
  input: 'What is my name?',
  chat_history: [
    HumanMessage { content: "I'm Nemo!" },
    AIMessage { content: "Hello Nemo! It's great to meet you. How can I assist you today?" },
    HumanMessage { content: 'What is my name?' },
    AIMessage { content: 'Your name is Nemo!' }
  ],
  output: 'Your name is Nemo!'
}
```
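Because the wrapper writes each turn into the underlying `ChatMessageHistory` instance, you can also inspect or reset the stored conversation directly. A minimal sketch:

```typescript
// Dump the accumulated turns, then start a fresh conversation.
console.log(await demoEphemeralChatMessageHistory.getMessages());
await demoEphemeralChatMessageHistory.clear();
```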
https://js.langchain.com/v0.1/docs/use_cases/sql/large_db/
Large databases
===============
In order to write valid queries against a database, we need to feed the model the table names, table schemas, and feature values for it to query over. When there are many tables, columns, and/or high-cardinality columns, it becomes impossible for us to dump the full information about our database in every prompt. Instead, we must find ways to dynamically insert into the prompt only the most relevant information. Let's take a look at some techniques for doing this.
Setup
---------------------------------------
First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.
```bash
npm install langchain @langchain/community @langchain/openai typeorm sqlite3
```
```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The below example will use a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
*/
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Many tables
---------------------------------------------------------
One of the main pieces of information we need to include in our prompt is the schemas of the relevant tables. When we have very many tables, we can't fit all of the schemas in a single prompt. What we can do in such cases is first extract the names of the tables related to the user input, and then include only their schemas.
One easy and reliable way to do this is using OpenAI function-calling and a Zod schema. The `withStructuredOutput` method on chat models lets us do just this:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { z } from "zod";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });

const Table = z.object({
  names: z.array(z.string()).describe("Names of tables in SQL database"),
});

const tableNames = db.allTables.map((t) => t.tableName).join("\n");
const system = `Return the names of ALL the SQL tables that MIGHT be relevant to the user question.
The tables are:

${tableNames}

Remember to include ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.`;
const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{input}"],
]);
const tableChain = prompt.pipe(llm.withStructuredOutput(Table));

console.log(
  await tableChain.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
{ names: [ 'Artist', 'Track', 'Genre' ] }
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/5ca0c91e-4a40-44ef-8c45-9a4247dc474c/r
// -------------

/**
This works pretty well! Except, as we'll see below, we actually need a few other tables as well.
This would be pretty difficult for the model to know based just on the user question.
In this case, we might think to simplify our model's job by grouping the tables together.
We'll just ask the model to choose between categories "Music" and "Business",
and then take care of selecting all the relevant tables from there:
*/
const prompt2 = ChatPromptTemplate.fromMessages([
  [
    "system",
    `Return the names of the SQL tables that are relevant to the user question.
The tables are:

Music
Business`,
  ],
  ["human", "{input}"],
]);
const categoryChain = prompt2.pipe(llm.withStructuredOutput(Table));
console.log(
  await categoryChain.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
{ names: [ 'Music' ] }
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/12b62e78-bfbe-42ff-86f2-ad738a476554/r
// -------------

const getTables = (categories: z.infer<typeof Table>): Array<string> => {
  let tables: Array<string> = [];
  for (const category of categories.names) {
    if (category === "Music") {
      tables = tables.concat([
        "Album",
        "Artist",
        "Genre",
        "MediaType",
        "Playlist",
        "PlaylistTrack",
        "Track",
      ]);
    } else if (category === "Business") {
      tables = tables.concat([
        "Customer",
        "Employee",
        "Invoice",
        "InvoiceLine",
      ]);
    }
  }
  return tables;
};
const tableChain2 = categoryChain.pipe(getTables);
console.log(
  await tableChain2.invoke({
    input: "What are all the genres of Alanis Morisette songs?",
  })
);
/**
[
  'Album', 'Artist', 'Genre', 'MediaType',
  'Playlist', 'PlaylistTrack', 'Track'
]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/e78c10aa-e923-4a24-b0c8-f7a6f5d316ce/r
// -------------

// Now that we've got a chain that can output the relevant tables for any query,
// we can combine this with our createSqlQueryChain, which can accept a list of
// tableNamesToUse to determine which table schemas are included in the prompt:
const queryChain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});
const tableChain3 = RunnableSequence.from([
  {
    input: (i: { question: string }) => i.question,
  },
  tableChain2,
]);
const fullChain = RunnablePassthrough.assign({
  tableNamesToUse: tableChain3,
}).pipe(queryChain);

const query = await fullChain.invoke({
  question: "What are all the genres of Alanis Morisette songs?",
});
console.log(query);
/**
SELECT DISTINCT "Genre"."Name"
FROM "Genre"
JOIN "Track" ON "Genre"."GenreId" = "Track"."GenreId"
JOIN "Album" ON "Track"."AlbumId" = "Album"."AlbumId"
JOIN "Artist" ON "Album"."ArtistId" = "Artist"."ArtistId"
WHERE "Artist"."Name" = 'Alanis Morissette'
LIMIT 5;
*/
console.log(await db.run(query));
/**
[{"Name":"Rock"}]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/c7d576d0-3462-40db-9edc-5492f10555bf/r
// -------------

// We might rephrase our question slightly to remove redundancy in the answer
const query2 = await fullChain.invoke({
  question: "What is the set of all unique genres of Alanis Morisette songs?",
});
console.log(query2);
/**
SELECT DISTINCT Genre.Name
FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Alanis Morissette'
*/
console.log(await db.run(query2));
/**
[{"Name":"Rock"}]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/6e80087d-e930-4f22-9b40-f7edb95a2145/r
// -------------
```
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
We've seen how to dynamically include a subset of table schemas in a prompt within a chain. Another possible approach to this problem is to let an Agent decide for itself when to look up tables by giving it a Tool to do so. You can see an example of this in the [SQL: Agents](/v0.1/docs/use_cases/sql/agents/) guide.
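As a rough sketch of that approach (our own illustration, not the Agents guide's exact code), the table lookup could be wrapped in a `DynamicTool` that the agent calls on demand. The tool name and description here are our own:

```typescript
// Hypothetical table-listing tool for an agent; reuses the same Chinook
// `SqlDatabase` setup as the examples above.
import { DynamicTool } from "@langchain/core/tools";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

// The agent can invoke this tool whenever it decides it needs to see the
// available tables, instead of us pre-selecting them.
const listTablesTool = new DynamicTool({
  name: "list_sql_tables",
  description: "Returns the names of all tables in the SQL database.",
  func: async () => db.allTables.map((t) => t.tableName).join("\n"),
});
```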
High-cardinality columns
------------------------------------------------------------------------------------------------
High-cardinality refers to columns in a database that have a vast range of unique values. These columns are characterized by a high level of uniqueness in their data entries, such as individual names, addresses, or product serial numbers. High-cardinality data can pose challenges for indexing and querying, as it requires more sophisticated strategies to efficiently filter and retrieve specific entries.
In order to filter columns that contain proper nouns such as addresses, song names, or artists, we first need to double-check the spelling in order to filter the data correctly.
One naive strategy is to create a vector store with all the distinct proper nouns that exist in the database. We can then query that vector store with each user input and inject the most relevant proper nouns into the prompt.
First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:
```typescript
import { DocumentInterface } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

async function queryAsList(database: any, query: string): Promise<string[]> {
  const res: Array<{ [key: string]: string }> = JSON.parse(
    await database.run(query)
  )
    .flat()
    .filter((el: any) => el != null);
  const justValues: Array<string> = res.map((item) =>
    Object.values(item)[0]
      .replace(/\b\d+\b/g, "")
      .trim()
  );
  return justValues;
}

let properNouns: string[] = await queryAsList(db, "SELECT Name FROM Artist");
properNouns = properNouns.concat(
  await queryAsList(db, "SELECT Title FROM Album")
);
properNouns = properNouns.concat(
  await queryAsList(db, "SELECT Name FROM Genre")
);
console.log(properNouns.length);
/**
647
*/
console.log(properNouns.slice(0, 5));
/**
[
  'AC/DC',
  'Accept',
  'Aerosmith',
  'Alanis Morissette',
  'Alice In Chains'
]
*/

// Now we can embed and store all of our values in a vector database:
const vectorDb = await MemoryVectorStore.fromTexts(
  properNouns,
  {},
  new OpenAIEmbeddings()
);
const retriever = vectorDb.asRetriever(15);

// And put together a query construction chain that first retrieves values
// from the database and inserts them into the prompt:
const system = `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.

Here is the relevant table info: {table_info}

Here is a non-exhaustive list of possible feature values.
If filtering on a feature value make sure to check its spelling against this list first:

{proper_nouns}`;
const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{input}"],
]);
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const queryChain = await createSqlQueryChain({
  llm,
  db,
  prompt,
  dialect: "sqlite",
});
const retrieverChain = RunnableSequence.from([
  (i: { question: string }) => i.question,
  retriever,
  (docs: Array<DocumentInterface>) =>
    docs.map((doc) => doc.pageContent).join("\n"),
]);
const chain = RunnablePassthrough.assign({
  proper_nouns: retrieverChain,
}).pipe(queryChain);

// To try out our chain, let's see what happens when we try filtering on
// "elenis moriset", a misspelling of Alanis Morissette, without and with retrieval:

// Without retrieval
const query = await queryChain.invoke({
  question: "What are all the genres of Elenis Moriset songs?",
  proper_nouns: "",
});
console.log("query", query);
/**
query SELECT DISTINCT Genre.Name
FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Elenis Moriset'
LIMIT 5;
*/
console.log("db query results", await db.run(query));
/**
db query results []
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/b153cb9b-6fbb-43a8-b2ba-4c86715183b9/r
// -------------

// With retrieval:
const query2 = await chain.invoke({
  question: "What are all the genres of Elenis Moriset songs?",
});
console.log("query2", query2);
/**
query2 SELECT DISTINCT Genre.Name
FROM Genre
JOIN Track ON Genre.GenreId = Track.GenreId
JOIN Album ON Track.AlbumId = Album.AlbumId
JOIN Artist ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name = 'Alanis Morissette';
*/
console.log("db query results", await db.run(query2));
/**
db query results [{"Name":"Rock"}]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/2f4f0e37-3b7f-47b5-837c-e2952489cac0/r
// -------------
```
#### API Reference:
* [DocumentInterface](https://api.js.langchain.com/interfaces/langchain_core_documents.DocumentInterface.html) from `@langchain/core/documents`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
We can see that with retrieval we're able to correct the spelling and get back a valid result.
Another possible approach to this problem is to let an Agent decide for itself when to look up proper nouns. You can see an example of this in the [SQL: Agents](/v0.1/docs/use_cases/sql/agents/) guide.
https://js.langchain.com/v0.1/docs/use_cases/sql/quickstart/
Quickstart
==========
In this guide we'll go over the basic ways to create a Q&A chain and agent over a SQL database. These systems allow us to ask a question about the data in a SQL database and get back a natural language answer. The main difference between the two is that our agent can query the database in a loop as many times as it needs in order to answer the question.
⚠️ Security note ⚠️
-------------------------------------------------------------------------------
Building Q&A systems over SQL databases can require executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent's needs. This will mitigate, though not eliminate, the risks of building a model-driven system. For more on general security best practices, see [here](/v0.1/docs/security/).
Architecture
------------------------------------------------------------
At a high level, the steps of most SQL chains and agents are (see the sketch after the diagram below):
1. **Convert question to SQL query**: The model converts user input to a SQL query.
2. **Execute SQL query**: Execute the generated SQL query.
3. **Answer the question**: The model responds to the user input using the query results.
![SQL Use Case Diagram](/v0.1/assets/images/sql_usecase-d432701261f05ab69b38576093718cf3.png)
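To make that flow concrete, here is a minimal outline of how the three steps compose. The function names are hypothetical stubs standing in for the chains built step by step in the rest of this guide:

```typescript
// Hypothetical stub signatures; the real runnables are defined below.
declare function writeQuery(question: string): Promise<string>; // 1. question -> SQL
declare function runQuery(sql: string): Promise<string>; // 2. execute the SQL
declare function answerQuestion(
  question: string,
  sql: string,
  result: string
): Promise<string>; // 3. answer from the results

// The overall pipeline: each step feeds the next.
async function askDatabase(question: string): Promise<string> {
  const sql = await writeQuery(question);
  const result = await runQuery(sql);
  return answerQuestion(question, sql, result);
}
```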
Setup
---------------------------------------
First, get required packages and set environment variables:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm i langchain @langchain/community @langchain/openai

# Yarn
yarn add langchain @langchain/community @langchain/openai

# pnpm
pnpm add langchain @langchain/community @langchain/openai
```
We default to OpenAI models in this guide.
```bash
export OPENAI_API_KEY=<your key>

# Uncomment the below to use LangSmith. Not required, but recommended for debugging and observability.
# export LANGCHAIN_API_KEY=<your key>
# export LANGCHAIN_TRACING_V2=true
```
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
*/
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Great! We've got a SQL database that we can query. Now let's try hooking it up to an LLM.
Chain
---------------------------------------
Let's create a simple chain that takes a question, turns it into a SQL query, executes the query, and uses the result to answer the original question.
### Convert question to SQL query
The first step in a SQL chain or agent is to take the user input and convert it to a SQL query. LangChain comes with a built-in chain for this: [`createSqlQueryChain`](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html)
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const chain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});
const response = await chain.invoke({
  question: "How many employees are there?",
});
console.log("response", response);
/**
response SELECT COUNT(*) FROM "Employee"
*/
console.log("db run result", await db.run(response));
/**
db run result [{"COUNT(*)":8}]
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
We can look at the [LangSmith trace](https://smith.langchain.com/public/6d8f0213-9f02-498e-aeb2-ec774e324e2c/r) to get a better understanding of what this chain is doing. We can also inspect the chain directly for its prompts. Looking at the prompt (below), we can see that it is:
* Dialect-specific. In this case it references SQLite explicitly.
* Has definitions for all the available tables.
* Has three example rows for each table.
This technique is inspired by papers like [this one](https://arxiv.org/pdf/2204.00498.pdf), which suggest that showing example rows and being explicit about tables improves performance. We can also inspect the full prompt via the LangSmith trace:
![Chain Prompt](/v0.1/assets/images/sql_quickstart_langsmith_prompt-e90559eddd490ceee277642d9e76b37b.png)
### Execute SQL query
Now that we've generated a SQL query, we'll want to execute it. This is the most dangerous part of creating a SQL chain. Consider carefully if it is OK to run automated queries over your data. Minimize the database connection permissions as much as possible. Consider adding a human approval step to your chains before query execution (see below).
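As one possible sketch of such an approval step (our own illustration, not a LangChain API), you could gate execution on an interactive confirmation. Functions piped into a runnable are wrapped in a `RunnableLambda`, so the gate can sit between query generation and execution:

```typescript
// Hypothetical human-approval gate: ask for confirmation before any
// model-generated SQL is executed.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

async function confirmQuery(query: string): Promise<string> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const answer = await rl.question(`About to run:\n${query}\nProceed? [y/N] `);
  rl.close();
  if (answer.trim().toLowerCase() !== "y") {
    throw new Error("Query rejected by human reviewer");
  }
  return query;
}

// e.g. with the `writeQuery` and `executeQuery` runnables defined below:
// const chain = writeQuery.pipe(confirmQuery).pipe(executeQuery);
```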
We can use the [`QuerySqlTool`](https://api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) to easily add query execution to our chain:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { QuerySqlTool } from "langchain/tools/sql";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const executeQuery = new QuerySqlTool(db);
const writeQuery = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});
const chain = writeQuery.pipe(executeQuery);
console.log(await chain.invoke({ question: "How many employees are there" }));
/**
[{"COUNT(*)":8}]
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [QuerySqlTool](https://api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) from `langchain/tools/sql`
Tip: See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/3cbcf6f2-a55b-4701-a2e3-9928e4747328/r).
### Answer the question
Now that we have a way to automatically generate and execute queries, we just need to combine the original question and SQL query result to generate a final answer. We can do this by passing the question and result to the LLM once more:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { QuerySqlTool } from "langchain/tools/sql";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const executeQuery = new QuerySqlTool(db);
const writeQuery = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

const answerPrompt =
  PromptTemplate.fromTemplate(`Given the following user question, corresponding SQL query, and SQL result, answer the user question.

Question: {question}
SQL Query: {query}
SQL Result: {result}
Answer: `);

const answerChain = answerPrompt.pipe(llm).pipe(new StringOutputParser());
const chain = RunnableSequence.from([
  RunnablePassthrough.assign({ query: writeQuery }).assign({
    result: (i: { query: string }) => executeQuery.invoke(i.query),
  }),
  answerChain,
]);
console.log(await chain.invoke({ question: "How many employees are there" }));
/**
There are 8 employees.
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [QuerySqlTool](https://api.js.langchain.com/classes/langchain_tools_sql.QuerySqlTool.html) from `langchain/tools/sql`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [RunnablePassthrough](https://api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
Tip: See a LangSmith trace of the chain above [here](https://smith.langchain.com/public/d130ce1f-1fce-4192-921e-4b522884ec1a/r).
### Next steps
For more complex query generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like these and more, check out:
* [Prompting strategies](/v0.1/docs/use_cases/sql/prompting/): Advanced prompt engineering techniques.
* [Query checking](/v0.1/docs/use_cases/sql/query_checking/): Add query validation and error handling.
* [Large databases](/v0.1/docs/use_cases/sql/large_db/): Techniques for working with large databases.
Agents
------------------------------------------
LangChain offers a number of tools and functions that allow you to create SQL Agents which can provide a more flexible way of interacting with SQL databases. The main advantages of using SQL Agents are:
* It can answer questions based on the database's schema as well as on the database's content (like describing a specific table).
* It can recover from errors by running a generated query, catching the traceback and regenerating it correctly.
* It can answer questions that require multiple dependent queries.
* It will save tokens by only considering the schema from relevant tables.
To initialize the agent, we use the [`createOpenAIToolsAgent`](https://api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) function. This agent uses the [`SqlToolkit`](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html), which contains tools to do the following (a minimal setup sketch follows the list):
* Create and execute queries
* Check query syntax
* Retrieve table descriptions
* … and more
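As a minimal sketch of that setup (mirroring the complete walkthrough in the [SQL: Agents](/v0.1/docs/use_cases/sql/agents/) guide):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { SqlToolkit } from "langchain/agents/toolkits/sql";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

// The toolkit bundles the schema-inspection, query-checking, and
// query-execution tools that the agent will call.
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const sqlToolKit = new SqlToolkit(db, llm);
const tools = sqlToolKit.getTools();
```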
https://js.langchain.com/v0.1/docs/use_cases/sql/query_checking/
Query validation
================
Perhaps the most error-prone part of any SQL chain or agent is writing valid and safe SQL queries. In this guide we'll go over some strategies for validating our queries and handling invalid queries.
Setup
---------------------------------------
First, get required packages and set environment variables:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install langchain @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The example below uses a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this guide:
* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
*/
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Query checker
---------------------------------------------------------------
Perhaps the simplest strategy is to ask the model itself to check the original query for common mistakes. Suppose we have the following SQL query chain:
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const chain = await createSqlQueryChain({
  llm,
  db,
  dialect: "sqlite",
});

/**
 * And we want to validate its outputs. We can do so by extending the chain
 * with a second prompt and model call:
 */
const SYSTEM_PROMPT = `Double check the user's {dialect} query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

If there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.

Output the final SQL query only.`;

const prompt = await ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT],
  ["human", "{query}"],
]).partial({ dialect: "sqlite" });

const validationChain = prompt.pipe(llm).pipe(new StringOutputParser());

const fullChain = RunnableSequence.from([
  {
    query: async (i: { question: string }) => chain.invoke(i),
  },
  validationChain,
]);
const query = await fullChain.invoke({
  question:
    "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010",
});
console.log("query", query);
/**
query SELECT AVG("Total") FROM "Invoice" WHERE "CustomerId" IN
(SELECT "CustomerId" FROM "Customer" WHERE "Country" = 'USA' AND "Fax" IS NULL)
AND "InvoiceDate" BETWEEN '2003-01-01 00:00:00' AND '2009-12-31 23:59:59'
*/
console.log("db query results", await db.run(query));
/**
db query results [{"AVG(\"Total\")":6.632999999999998}]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/d1131395-8477-47cd-8f74-e0c5491ea956/r
// -------------

// The obvious downside of this approach is that we need to make two model calls
// instead of one to generate our query. To get around this we can try to perform
// the query generation and query check in a single model invocation:
const SYSTEM_PROMPT_2 = `You are a {dialect} expert. Given an input question, create a syntactically correct {dialect} query to run.
Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".

Only use the following tables:
{table_info}

Write an initial draft of the query. Then double check the {dialect} query for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins

Use format:

First draft: <<FIRST_DRAFT_QUERY>>
Final answer: <<FINAL_ANSWER_QUERY>>`;

const prompt2 = await PromptTemplate.fromTemplate(
  `System: ${SYSTEM_PROMPT_2}

Human: {input}`
).partial({ dialect: "sqlite" });

// (a more defensive variant of this parser is sketched after the API reference below)
const parseFinalAnswer = (output: string): string =>
  output.split("Final answer: ")[1];

const chain2 = (
  await createSqlQueryChain({
    llm,
    db,
    prompt: prompt2,
    dialect: "sqlite",
  })
).pipe(parseFinalAnswer);

const query2 = await chain2.invoke({
  question:
    "What's the average Invoice from an American customer whose Fax is missing since 2003 but before 2010",
});
console.log("query2", query2);
/**
query2 SELECT AVG("Total") FROM "Invoice" WHERE "CustomerId" IN
(SELECT "CustomerId" FROM "Customer" WHERE "Country" = 'USA' AND "Fax" IS NULL)
AND date("InvoiceDate") BETWEEN date('2003-01-01') AND date('2009-12-31') LIMIT 5
*/
console.log("db query results", await db.run(query2));
/**
db query results [{"AVG(\"Total\")":6.632999999999998}]
*/

// -------------
// You can see a LangSmith trace of the above chain here:
// https://smith.langchain.com/public/e21d6146-eca9-4de6-a078-808fd09979ea/r
// -------------
```
#### API Reference:
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
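One caveat with the single-call approach above: `parseFinalAnswer` returns `undefined` whenever the model omits the `Final answer:` marker. A slightly more defensive variant (our own suggestion, not part of the original guide) falls back to the raw output instead:

```typescript
// Defensive parse: if the "Final answer:" marker is missing, return the whole
// model output rather than undefined, so `db.run` always receives a string.
const parseFinalAnswer = (output: string): string => {
  const marker = "Final answer: ";
  const idx = output.lastIndexOf(marker);
  return idx === -1 ? output.trim() : output.slice(idx + marker.length).trim();
};
```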
https://js.langchain.com/v0.1/docs/use_cases/sql/agents/
Agents
======
LangChain offers a number of tools and functions that allow you to create SQL Agents which can provide a more flexible way of interacting with SQL databases. The main advantages of using SQL Agents are:
* It can answer questions based on the database's schema as well as on the database's content (like describing a specific table).
* It can recover from errors by running a generated query, catching the traceback and regenerating it correctly.
* It can query the database as many times as needed to answer the user question.
To initialize the agent, we'll use the [`createOpenAIToolsAgent`](https://api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) function. This agent uses the [`SqlToolkit`](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html), which contains tools to:
* Create and execute queries
* Check query syntax
* Retrieve table descriptions
* … and more
Setup
---------------------------------------
First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.
```bash
npm install langchain @langchain/community @langchain/openai typeorm sqlite3
```

```bash
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
```
The example below uses a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this guide:
* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
```typescript
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee', 'Genre',
  'Invoice', 'InvoiceLine', 'MediaType', 'Playlist',
  'PlaylistTrack', 'Track'
]
*/
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Initializing the Agent
------------------------------------------------------------------------------------------
We'll use an OpenAI chat model and an "openai-tools" agent, which will use OpenAI's function-calling API to drive the agent's tool selection and invocations.
As we can see, the agent will first choose which tables are relevant and then add the schema for those tables and a few sample rows to the prompt.
```typescript
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIToolsAgent, AgentExecutor } from "langchain/agents";
import { SqlToolkit } from "langchain/agents/toolkits/sql";
import { AIMessage } from "langchain/schema";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const sqlToolKit = new SqlToolkit(db, llm);
const tools = sqlToolKit.getTools();

const SQL_PREFIX = `You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results using the LIMIT clause.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the few relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools.
Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.`;
const SQL_SUFFIX = `Begin!

Question: {input}
Thought: I should look at the tables in the database to see what I can query.
{agent_scratchpad}`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SQL_PREFIX],
  HumanMessagePromptTemplate.fromTemplate("{input}"),
  new AIMessage(SQL_SUFFIX.replace("{agent_scratchpad}", "")),
  new MessagesPlaceholder("agent_scratchpad"),
]);
const newPrompt = await prompt.partial({
  dialect: sqlToolKit.dialect,
  top_k: "10",
});
const runnableAgent = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt: newPrompt,
});
const agentExecutor = new AgentExecutor({
  agent: runnableAgent,
  tools,
});

console.log(
  await agentExecutor.invoke({
    input:
      "List the total sales per country. Which country's customers spent the most?",
  })
);
/**
{
  input: "List the total sales per country. Which country's customers spent the most?",
  output: 'The total sales per country are as follows:\n' +
    '\n' +
    '1. USA: $523.06\n' +
    '2. Canada: $303.96\n' +
    '3. France: $195.10\n' +
    '4. Brazil: $190.10\n' +
    '5. Germany: $156.48\n' +
    '6. United Kingdom: $112.86\n' +
    '7. Czech Republic: $90.24\n' +
    '8. Portugal: $77.24\n' +
    '9. India: $75.26\n' +
    '10. Chile: $46.62\n' +
    '\n' +
    "To find out which country's customers spent the most, we can see that the customers from the USA spent the most with a total sales of $523.06."
}
*/

console.log(
  await agentExecutor.invoke({
    input: "Describe the playlisttrack table",
  })
);
/**
{
  input: 'Describe the playlisttrack table',
  output: 'The `PlaylistTrack` table has two columns: `PlaylistId` and `TrackId`. Both columns are of type INTEGER and are not nullable (NOT NULL).\n' +
    '\n' +
    'Here are three sample rows from the `PlaylistTrack` table:\n' +
    '\n' +
    '| PlaylistId | TrackId |\n' +
    '|------------|---------|\n' +
    '| 1          | 3402    |\n' +
    '| 1          | 3389    |\n' +
    '| 1          | 3390    |\n' +
    '\n' +
    'Please let me know if there is anything else I can help you with.'
}
*/
```
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [createOpenAIToolsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) from `langchain/agents`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [SqlToolkit](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html) from `langchain/agents/toolkits/sql`
* AIMessage from `langchain/schema`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Tip: You can see the LangSmith traces from the example above [here](https://smith.langchain.com/public/8bdedd3f-a76a-4968-878f-ad7366540baa/r) and [here](https://smith.langchain.com/public/6b3f932a-3f37-4946-8db4-99cc826da7de/r).
Using a dynamic few-shot prompt
---------------------------------------------------------------------------------------------------------------------
To optimize agent performance, we can provide a custom prompt with domain-specific knowledge. In this case, we'll create a few-shot prompt with an example selector that dynamically builds the few-shot prompt based on the user input. This helps the model make better queries by inserting relevant queries in the prompt that the model can use as a reference.
First we need some user input SQL query examples:
```typescript
export const examples = [
  { input: "List all artists.", query: "SELECT * FROM Artist;" },
  {
    input: "Find all albums for the artist 'AC/DC'.",
    query:
      "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
  },
  {
    input: "List all tracks in the 'Rock' genre.",
    query:
      "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
  },
  {
    input: "Find the total duration of all tracks.",
    query: "SELECT SUM(Milliseconds) FROM Track;",
  },
  {
    input: "List all customers from Canada.",
    query: "SELECT * FROM Customer WHERE Country = 'Canada';",
  },
  {
    input: "How many tracks are there in the album with ID 5?",
    query: "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
  },
  {
    input: "Find the total number of invoices.",
    query: "SELECT COUNT(*) FROM Invoice;",
  },
  {
    input: "List all tracks that are longer than 5 minutes.",
    query: "SELECT * FROM Track WHERE Milliseconds > 300000;",
  },
  {
    input: "Who are the top 5 customers by total purchase?",
    query:
      "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
  },
  {
    input: "Which albums are from the year 2000?",
    query: "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
  },
  {
    input: "How many employees are there",
    query: 'SELECT COUNT(*) FROM "Employee"',
  },
];
```
Now we can create an example selector. This will take the actual user input and select some number of examples to add to our few-shot prompt. We'll use a `SemanticSimilarityExampleSelector`, which will perform a semantic search using the embeddings and vector store we configure to find the examples most similar to our input:
```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import {
  FewShotPromptTemplate,
  PromptTemplate,
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { SqlToolkit } from "langchain/agents/toolkits/sql";
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { examples } from "./examples.js";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  examples,
  new OpenAIEmbeddings(),
  HNSWLib,
  {
    k: 5,
    inputKeys: ["input"],
  }
);

// Now we can create our FewShotPromptTemplate, which takes our example selector,
// an example prompt for formatting each example, and a string prefix and suffix
// to put before and after our formatted examples:
const SYSTEM_PREFIX = `You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.

Here are some examples of user inputs and their corresponding SQL queries:`;

const fewShotPrompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt: PromptTemplate.fromTemplate(
    "User input: {input}\nSQL query: {query}"
  ),
  inputVariables: ["input", "dialect", "top_k"],
  prefix: SYSTEM_PREFIX,
  suffix: "",
});

// Since our underlying agent is an OpenAI tools agent
// (https://js.langchain.com/docs/modules/agents/agent_types/openai_tools_agent),
// which uses OpenAI function calling, our full prompt should be a chat prompt
// with a human message template and an agentScratchpad MessagesPlaceholder.
// The few-shot prompt will be used for our system message:
const fullPrompt = ChatPromptTemplate.fromMessages([
  new SystemMessagePromptTemplate(fewShotPrompt),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

// And now we can create our agent with our custom prompt:
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const sqlToolKit = new SqlToolkit(db, llm);
const tools = sqlToolKit.getTools();
const newPrompt = await fullPrompt.partial({
  dialect: sqlToolKit.dialect,
  top_k: "10",
});
const runnableAgent = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt: newPrompt,
});
const agentExecutor = new AgentExecutor({
  agent: runnableAgent,
  tools,
});
console.log(
  await agentExecutor.invoke({ input: "How many artists are there?" })
);
/**
{
  input: 'How many artists are there?',
  output: 'There are 275 artists.'
}
*/
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [SystemMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.SystemMessagePromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [SqlToolkit](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html) from `langchain/agents/toolkits/sql`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIToolsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) from `langchain/agents`
tip
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/18962b1f-e8d9-4928-9813-49031e421a0a/r)
Dealing with high-cardinality columns[β](#dealing-with-high-cardinality-columns "Direct link to Dealing with high-cardinality columns")
---------------------------------------------------------------------------------------------------------------------------------------
To filter columns that contain proper nouns such as addresses, song names, or artists, we first need to double-check the spelling so that we can filter the data correctly.
We can achieve this by creating a vector store with all the distinct proper nouns that exist in the database. We can then have the agent query that vector store each time the user includes a proper noun in their question, to find the correct spelling for that word. In this way, the agent can make sure it understands which entity the user is referring to before building the target query.
First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { SqlToolkit } from "langchain/agents/toolkits/sql";
import { SqlDatabase } from "langchain/sql_db";
import { Tool } from "langchain/tools";
import { createRetrieverTool } from "langchain/tools/retriever";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

async function queryAsList(query: string): Promise<string[]> {
  const res: Array<{ [key: string]: string }> = JSON.parse(await db.run(query))
    .flat()
    .filter((el: any) => el != null);
  const justValues: Array<string> = res.map((item) =>
    Object.values(item)[0]
      .replace(/\b\d+\b/g, "")
      .trim()
  );
  return justValues;
}

const artists = await queryAsList("SELECT Name FROM Artist");
const albums = await queryAsList("SELECT Title FROM Album");
console.log(albums.slice(0, 5));
/**
[
  'For Those About To Rock We Salute You',
  'Balls to the Wall',
  'Restless and Wild',
  'Let There Be Rock',
  'Big Ones'
]
*/

// Now we can proceed with creating the custom retriever tool and the final agent:
const vectorDb = await MemoryVectorStore.fromTexts(
  artists,
  {},
  new OpenAIEmbeddings()
);
const retriever = vectorDb.asRetriever(15);
const description = `Use to look up values to filter on.
Input is an approximate spelling of the proper noun, output is valid proper nouns.
Use the noun most similar to the search.`;
const retrieverTool = createRetrieverTool(retriever, {
  description,
  name: "search_proper_nouns",
}) as unknown as Tool;

const system = `You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.

If you need to filter on a proper noun, you must ALWAYS first look up the filter value using the "search_proper_nouns" tool!

You have access to the following tables: {table_names}

If the question does not seem related to the database, just return "I don't know" as the answer.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);
const llm = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
const sqlToolKit = new SqlToolkit(db, llm);
const newPrompt = await prompt.partial({
  dialect: sqlToolKit.dialect,
  top_k: "10",
  table_names: db.allTables.map((t) => t.tableName).join(", "),
});
const tools = [...sqlToolKit.getTools(), retrieverTool];
const runnableAgent = await createOpenAIToolsAgent({
  llm,
  tools,
  prompt: newPrompt,
});
const agentExecutor = new AgentExecutor({
  agent: runnableAgent,
  tools,
});
console.log(
  await agentExecutor.invoke({
    input: "How many albums does alis in chain have?",
  })
);
/**
{
  input: 'How many albums does alis in chain have?',
  output: 'Alice In Chains has 1 album.'
}
*/
#### API Reference:
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [AgentExecutor](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html) from `langchain/agents`
* [createOpenAIToolsAgent](https://api.js.langchain.com/functions/langchain_agents.createOpenAIToolsAgent.html) from `langchain/agents`
* [SqlToolkit](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html) from `langchain/agents/toolkits/sql`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* Tool from `langchain/tools`
* [createRetrieverTool](https://api.js.langchain.com/functions/langchain_tools_retriever.createRetrieverTool.html) from `langchain/tools/retriever`
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
tip
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/5b4e6f56-d252-4d3d-af74-638dc0d1d9cb/r)
Next steps[β](#next-steps "Direct link to Next steps")
------------------------------------------------------
To learn more about the built-in generic agent types as well as how to build custom agents, head to the [Agents Modules](/v0.1/docs/modules/agents/).
The built-in `AgentExecutor` runs a simple Agent action -> Tool call -> Agent action⦠loop. To build more complex agent runtimes, head to the [LangGraph section](/v0.1/docs/use_cases/sql/agents/docs/langgraph/).
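To make that loop concrete, here is a minimal sketch of what an executor-style runtime does conceptually. The `planNextStep` function and the plain tool map are hypothetical stand-ins for the LLM call and LangChain's real tool interface; this is not `AgentExecutor`'s actual implementation:

// A minimal sketch of an agent runtime loop (hypothetical types, not
// LangChain's actual implementation).
type ToolCall = { tool: string; input: string };
type AgentStep = { action: ToolCall; observation: string };

// Hypothetical planning function: in a real agent this is an LLM call that
// either returns the next tool call or a final answer.
async function planNextStep(
  input: string,
  steps: AgentStep[]
): Promise<ToolCall | { finalAnswer: string }> {
  // Toy logic: call the tool once, then answer from its observation.
  if (steps.length === 0) {
    return { tool: "query_sql_database", input: "SELECT COUNT(*) FROM Artist;" };
  }
  return { finalAnswer: `There are ${steps[0].observation} artists.` };
}

async function runAgentLoop(
  input: string,
  tools: Record<string, (input: string) => Promise<string>>
): Promise<string> {
  const steps: AgentStep[] = [];
  // Agent action -> tool call -> agent action... until a final answer.
  for (;;) {
    const decision = await planNextStep(input, steps);
    if ("finalAnswer" in decision) return decision.finalAnswer;
    const observation = await tools[decision.tool](decision.input);
    steps.push({ action: decision, observation });
  }
}

// Usage with a stubbed tool:
const answer = await runAgentLoop("How many artists are there?", {
  query_sql_database: async () => "275",
});
console.log(answer); // There are 275 artists.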
https://js.langchain.com/v0.1/docs/use_cases/sql/prompting/
Prompting strategies
====================
In this guide we'll go over prompting strategies to improve SQL query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
First, install the required packages and set your environment variables. This example will use OpenAI as the LLM.
npm install langchain @langchain/community @langchain/openai typeorm sqlite3
export OPENAI_API_KEY="your api key"

# Uncomment the below to use LangSmith. Not required.
# export LANGCHAIN_API_KEY="your api key"
# export LANGCHAIN_TRACING_V2=true
The below example will use a SQLite connection with the Chinook database. Follow these [installation steps](https://database.guide/2-sample-databases-sqlite/) to create `Chinook.db` in the same directory as this notebook:
* Save [this](https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql) file as `Chinook_Sqlite.sql`
* Run `sqlite3 Chinook.db`
* Run `.read Chinook_Sqlite.sql`
* Test `SELECT * FROM Artist LIMIT 10;`
Now, `Chinook.db` is in our directory and we can interface with it using the TypeORM-driven `SqlDatabase` class:
import { SqlDatabase } from "langchain/sql_db";
import { DataSource } from "typeorm";

const datasource = new DataSource({
  type: "sqlite",
  database: "../../../../Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
console.log(db.allTables.map((t) => t.tableName));
/**
[
  'Album', 'Artist', 'Customer', 'Employee',
  'Genre', 'Invoice', 'InvoiceLine', 'MediaType',
  'Playlist', 'PlaylistTrack', 'Track'
]
*/
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
Dialect-specific prompting[β](#dialect-specific-prompting "Direct link to Dialect-specific prompting")
------------------------------------------------------------------------------------------------------
One of the simplest things we can do is make our prompt specific to the SQL dialect we're using. When using the built-in [`createSqlQueryChain`](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) and [`SqlDatabase`](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html), this is handled for you for any of the following dialects:
import { SQL_PROMPTS_MAP } from "langchain/chains/sql_db";

console.log({ SQL_PROMPTS_MAP: Object.keys(SQL_PROMPTS_MAP) });
/**
{
  SQL_PROMPTS_MAP: [ 'oracle', 'postgres', 'sqlite', 'mysql', 'mssql', 'sap hana' ]
}
*/

// For example, using our current DB we can see that we'll get a SQLite-specific prompt:
console.log({
  sqlite: SQL_PROMPTS_MAP.sqlite,
});
/**
{
  sqlite: PromptTemplate {
    inputVariables: [ 'dialect', 'table_info', 'input', 'top_k' ],
    template: 'You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\n' +
      'Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\n' +
      'Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.\n' +
      'Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n' +
      '\n' +
      'Use the following format:\n' +
      '\n' +
      'Question: "Question here"\n' +
      'SQLQuery: "SQL Query to run"\n' +
      'SQLResult: "Result of the SQLQuery"\n' +
      'Answer: "Final answer here"\n' +
      '\n' +
      'Only use the following tables:\n' +
      '{table_info}\n' +
      '\n' +
      'Question: {input}',
  }
}
*/
#### API Reference:
* [SQL\_PROMPTS\_MAP](https://api.js.langchain.com/variables/langchain_chains_sql_db.SQL_PROMPTS_MAP.html) from `langchain/chains/sql_db`
Table definitions and example rows[β](#table-definitions-and-example-rows "Direct link to Table definitions and example rows")
------------------------------------------------------------------------------------------------------------------------------
In basically any SQL chain, we'll need to feed the model at least part of the database schema. Without this it won't be able to write valid queries. Our database comes with some convenience methods to give us the relevant context. Specifically, we can get the table names, their schemas, and a sample of rows from each table:
import { db } from "../db.js";

const context = await db.getTableInfo();

console.log(context);
/**
CREATE TABLE Album (
  AlbumId INTEGER NOT NULL,
  Title NVARCHAR(160) NOT NULL,
  ArtistId INTEGER NOT NULL
)
SELECT * FROM "Album" LIMIT 3;
AlbumId Title ArtistId
1 For Those About To Rock We Salute You 1
2 Balls to the Wall 2
3 Restless and Wild 2

CREATE TABLE Artist (
  ArtistId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Artist" LIMIT 3;
ArtistId Name
1 AC/DC
2 Accept
3 Aerosmith

CREATE TABLE Customer (
  CustomerId INTEGER NOT NULL,
  FirstName NVARCHAR(40) NOT NULL,
  LastName NVARCHAR(20) NOT NULL,
  Company NVARCHAR(80),
  Address NVARCHAR(70),
  City NVARCHAR(40),
  State NVARCHAR(40),
  Country NVARCHAR(40),
  PostalCode NVARCHAR(10),
  Phone NVARCHAR(24),
  Fax NVARCHAR(24),
  Email NVARCHAR(60) NOT NULL,
  SupportRepId INTEGER
)
SELECT * FROM "Customer" LIMIT 3;
CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId
1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima,2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3
2 Leonie Köhler null Theodor-Heuss-Straße 34 Stuttgart null Germany 70174 +49 0711 2842222 null leonekohler@surfeu.de 5
3 François Tremblay null 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 null ftremblay@gmail.com 3

CREATE TABLE Employee (
  EmployeeId INTEGER NOT NULL,
  LastName NVARCHAR(20) NOT NULL,
  FirstName NVARCHAR(20) NOT NULL,
  Title NVARCHAR(30),
  ReportsTo INTEGER,
  BirthDate DATETIME,
  HireDate DATETIME,
  Address NVARCHAR(70),
  City NVARCHAR(40),
  State NVARCHAR(40),
  Country NVARCHAR(40),
  PostalCode NVARCHAR(10),
  Phone NVARCHAR(24),
  Fax NVARCHAR(24),
  Email NVARCHAR(60)
)
SELECT * FROM "Employee" LIMIT 3;
EmployeeId LastName FirstName Title ReportsTo BirthDate HireDate Address City State Country PostalCode Phone Fax Email
1 Adams Andrew General Manager null 1962-02-18 00:00:00 2002-08-14 00:00:00 11120 Jasper Ave NW Edmonton AB Canada T5K 2N1 +1 (780) 428-9482 +1 (780) 428-3457 andrew@chinookcorp.com
2 Edwards Nancy Sales Manager 1 1958-12-08 00:00:00 2002-05-01 00:00:00 825 8 Ave SW Calgary AB Canada T2P 2T3 +1 (403) 262-3443 +1 (403) 262-3322 nancy@chinookcorp.com
3 Peacock Jane Sales Support Agent 2 1973-08-29 00:00:00 2002-04-01 00:00:00 1111 6 Ave SW Calgary AB Canada T2P 5M5 +1 (403) 262-3443 +1 (403) 262-6712 jane@chinookcorp.com

CREATE TABLE Genre (
  GenreId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Genre" LIMIT 3;
GenreId Name
1 Rock
2 Jazz
3 Metal

CREATE TABLE Invoice (
  InvoiceId INTEGER NOT NULL,
  CustomerId INTEGER NOT NULL,
  InvoiceDate DATETIME NOT NULL,
  BillingAddress NVARCHAR(70),
  BillingCity NVARCHAR(40),
  BillingState NVARCHAR(40),
  BillingCountry NVARCHAR(40),
  BillingPostalCode NVARCHAR(10),
  Total NUMERIC(10,2) NOT NULL
)
SELECT * FROM "Invoice" LIMIT 3;
InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total
1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart null Germany 70174 1.98
2 4 2009-01-02 00:00:00 Ullevålsveien 14 Oslo null Norway 0171 3.96
3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels null Belgium 1000 5.94

CREATE TABLE InvoiceLine (
  InvoiceLineId INTEGER NOT NULL,
  InvoiceId INTEGER NOT NULL,
  TrackId INTEGER NOT NULL,
  UnitPrice NUMERIC(10,2) NOT NULL,
  Quantity INTEGER NOT NULL
)
SELECT * FROM "InvoiceLine" LIMIT 3;
InvoiceLineId InvoiceId TrackId UnitPrice Quantity
1 1 2 0.99 1
2 1 4 0.99 1
3 2 6 0.99 1

CREATE TABLE MediaType (
  MediaTypeId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "MediaType" LIMIT 3;
MediaTypeId Name
1 MPEG audio file
2 Protected AAC audio file
3 Protected MPEG-4 video file

CREATE TABLE Playlist (
  PlaylistId INTEGER NOT NULL,
  Name NVARCHAR(120)
)
SELECT * FROM "Playlist" LIMIT 3;
PlaylistId Name
1 Music
2 Movies
3 TV Shows

CREATE TABLE PlaylistTrack (
  PlaylistId INTEGER NOT NULL,
  TrackId INTEGER NOT NULL
)
SELECT * FROM "PlaylistTrack" LIMIT 3;
PlaylistId TrackId
1 3402
1 3389
1 3390

CREATE TABLE Track (
  TrackId INTEGER NOT NULL,
  Name NVARCHAR(200) NOT NULL,
  AlbumId INTEGER,
  MediaTypeId INTEGER NOT NULL,
  GenreId INTEGER,
  Composer NVARCHAR(220),
  Milliseconds INTEGER NOT NULL,
  Bytes INTEGER,
  UnitPrice NUMERIC(10,2) NOT NULL
)
SELECT * FROM "Track" LIMIT 3;
TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice
1 For Those About To Rock (We Salute You) 1 1 1 Angus Young,Malcolm Young,Brian Johnson 343719 11170334 0.99
2 Balls to the Wall 2 2 1 U. Dirkschneider,W. Hoffmann,H. Frank,P. Baltes,S. Kaufmann,G. Hoffmann 342562 5510424 0.99
3 Fast As a Shark 3 2 1 F. Baltes,S. Kaufman,U. Dirkscneider & W. Hoffman 230619 3990994 0.99
*/
Few-shot examples[β](#few-shot-examples "Direct link to Few-shot examples")
---------------------------------------------------------------------------
Including examples of natural language questions being converted to valid SQL queries against our database in the prompt will often improve model performance, especially for complex queries.
Let's say we have the following examples:
export const examples = [
  { input: "List all artists.", query: "SELECT * FROM Artist;" },
  {
    input: "Find all albums for the artist 'AC/DC'.",
    query:
      "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
  },
  {
    input: "List all tracks in the 'Rock' genre.",
    query:
      "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
  },
  {
    input: "Find the total duration of all tracks.",
    query: "SELECT SUM(Milliseconds) FROM Track;",
  },
  {
    input: "List all customers from Canada.",
    query: "SELECT * FROM Customer WHERE Country = 'Canada';",
  },
  {
    input: "How many tracks are there in the album with ID 5?",
    query: "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
  },
  {
    input: "Find the total number of invoices.",
    query: "SELECT COUNT(*) FROM Invoice;",
  },
  {
    input: "List all tracks that are longer than 5 minutes.",
    query: "SELECT * FROM Track WHERE Milliseconds > 300000;",
  },
  {
    input: "Who are the top 5 customers by total purchase?",
    query:
      "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
  },
  {
    input: "Which albums are from the year 2000?",
    query: "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
  },
  {
    input: "How many employees are there",
    query: 'SELECT COUNT(*) FROM "Employee"',
  },
];
We can create a few-shot prompt with them like so:
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { examples } from "./examples.js";

const examplePrompt = PromptTemplate.fromTemplate(
  `User input: {input}\nSQL Query: {query}`
);

const prompt = new FewShotPromptTemplate({
  examples: examples.slice(0, 5),
  examplePrompt,
  prefix: `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.

Here is the relevant table info: {table_info}

Below are a number of examples of questions and their corresponding SQL queries.`,
  suffix: "User input: {input}\nSQL query: ",
  inputVariables: ["input", "top_k", "table_info"],
});

console.log(
  await prompt.format({
    input: "How many artists are there?",
    top_k: "3",
    table_info: "foo",
  })
);
/**
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than 3 rows.

Here is the relevant table info: foo

Below are a number of examples of questions and their corresponding SQL queries.

User input: List all artists.
SQL Query: SELECT * FROM Artist;

User input: Find all albums for the artist 'AC/DC'.
SQL Query: SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');

User input: List all tracks in the 'Rock' genre.
SQL Query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');

User input: Find the total duration of all tracks.
SQL Query: SELECT SUM(Milliseconds) FROM Track;

User input: List all customers from Canada.
SQL Query: SELECT * FROM Customer WHERE Country = 'Canada';

User input: How many artists are there?
SQL query:
*/
#### API Reference:
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Dynamic few-shot examples[β](#dynamic-few-shot-examples "Direct link to Dynamic few-shot examples")
---------------------------------------------------------------------------------------------------
If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. Specifically, given any input, we want to include the examples most relevant to that input.
We can do just this using an ExampleSelector. In this case we'll use a [`SemanticSimilarityExampleSelector`](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones:
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";
import { createSqlQueryChain } from "langchain/chains/sql_db";
import { examples } from "./examples.js";
import { db } from "../db.js";

const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples<
  typeof MemoryVectorStore
>(examples, new OpenAIEmbeddings(), MemoryVectorStore, {
  k: 5,
  inputKeys: ["input"],
});

console.log(
  await exampleSelector.selectExamples({ input: "how many artists are there?" })
);
/**
[
  { input: 'List all artists.', query: 'SELECT * FROM Artist;' },
  {
    input: 'How many employees are there',
    query: 'SELECT COUNT(*) FROM "Employee"'
  },
  {
    input: 'How many tracks are there in the album with ID 5?',
    query: 'SELECT COUNT(*) FROM Track WHERE AlbumId = 5;'
  },
  {
    input: 'Which albums are from the year 2000?',
    query: "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';"
  },
  {
    input: "List all tracks in the 'Rock' genre.",
    query: "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');"
  }
]
*/

// To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:
const examplePrompt = PromptTemplate.fromTemplate(
  `User input: {input}\nSQL Query: {query}`
);

const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  prefix: `You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than {top_k} rows.

Here is the relevant table info: {table_info}

Below are a number of examples of questions and their corresponding SQL queries.`,
  suffix: "User input: {input}\nSQL query: ",
  inputVariables: ["input", "top_k", "table_info"],
});

console.log(
  await prompt.format({
    input: "How many artists are there?",
    top_k: "3",
    table_info: "foo",
  })
);
/**
You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run.
Unless otherwise specified, do not return more than 3 rows.

Here is the relevant table info: foo

Below are a number of examples of questions and their corresponding SQL queries.

User input: List all artists.
SQL Query: SELECT * FROM Artist;

User input: How many employees are there
SQL Query: SELECT COUNT(*) FROM "Employee"

User input: How many tracks are there in the album with ID 5?
SQL Query: SELECT COUNT(*) FROM Track WHERE AlbumId = 5;

User input: Which albums are from the year 2000?
SQL Query: SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';

User input: List all tracks in the 'Rock' genre.
SQL Query: SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');

User input: How many artists are there?
SQL query:
*/

// Now we can use it in a chain:
const llm = new ChatOpenAI({
  temperature: 0,
});
const chain = await createSqlQueryChain({
  db,
  llm,
  prompt,
  dialect: "sqlite",
});

console.log(await chain.invoke({ question: "how many artists are there?" }));
/**
SELECT COUNT(*) FROM Artist;
*/
#### API Reference:
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [createSqlQueryChain](https://api.js.langchain.com/functions/langchain_chains_sql_db.createSqlQueryChain.html) from `langchain/chains/sql_db`
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/quickstart/
Quickstart
==========
This example will show how to use query analysis in a basic end-to-end example. This will cover creating a simple index, showing a failure mode that occurs when passing a raw user question to that index, and then an example of how query analysis can help address that issue. There are MANY different query analysis techniques and this end-to-end example will not show all of them.
For the purpose of this example, we will do retrieval over the LangChain YouTube videos.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
yarn add langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
pnpm add langchain @langchain/community @langchain/openai youtubei.js chromadb youtube-transcript
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
Weβll use OpenAI in this example:
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
### Load documents[β](#load-documents "Direct link to Load documents")
We can use the `YouTubeLoader` to load transcripts of a few LangChain videos:
import { DocumentInterface } from "@langchain/core/documents";
import { YoutubeLoader } from "langchain/document_loaders/web/youtube";
import { getYear } from "date-fns";

const urls = [
  "https://www.youtube.com/watch?v=HAn9vnJy6S4",
  "https://www.youtube.com/watch?v=dA1cHGACXCo",
  "https://www.youtube.com/watch?v=ZcEMLz27sL4",
  "https://www.youtube.com/watch?v=hvAPnpSfSGo",
  "https://www.youtube.com/watch?v=EhlPDL4QrWY",
  "https://www.youtube.com/watch?v=mmBo8nlu2j0",
  "https://www.youtube.com/watch?v=rQdibOsL1ps",
  "https://www.youtube.com/watch?v=28lC4fqukoc",
  "https://www.youtube.com/watch?v=es-9MgxB-uc",
  "https://www.youtube.com/watch?v=wLRHwKuKvOE",
  "https://www.youtube.com/watch?v=ObIltMaRJvY",
  "https://www.youtube.com/watch?v=DjuXACWYkkU",
  "https://www.youtube.com/watch?v=o7C9ld6Ln-M",
];

let docs: Array<DocumentInterface> = [];

for await (const url of urls) {
  const doc = await YoutubeLoader.createFromUrl(url, {
    language: "en",
    addVideoInfo: true,
  }).load();
  docs = docs.concat(doc);
}

console.log(docs.length);
/*
13
*/

// Add some additional metadata: what year the video was published.
// The JS API does not provide publish date, so we can use a
// hardcoded array with the dates instead.
const dates = [
  new Date("Jan 31, 2024"),
  new Date("Jan 26, 2024"),
  new Date("Jan 24, 2024"),
  new Date("Jan 23, 2024"),
  new Date("Jan 16, 2024"),
  new Date("Jan 5, 2024"),
  new Date("Jan 2, 2024"),
  new Date("Dec 20, 2023"),
  new Date("Dec 19, 2023"),
  new Date("Nov 27, 2023"),
  new Date("Nov 22, 2023"),
  new Date("Nov 16, 2023"),
  new Date("Nov 2, 2023"),
];

docs.forEach((doc, idx) => {
  // eslint-disable-next-line no-param-reassign
  doc.metadata.publish_year = getYear(dates[idx]);
  // eslint-disable-next-line no-param-reassign
  doc.metadata.publish_date = dates[idx];
});

// Here are the titles of the videos we've loaded:
console.log(docs.map((doc) => doc.metadata.title));
/*
[
  'OpenGPTs',
  'Building a web RAG chatbot: using LangChain, Exa (prev. Metaphor), LangSmith, and Hosted Langserve',
  'Streaming Events: Introducing a new `stream_events` method',
  'LangGraph: Multi-Agent Workflows',
  'Build and Deploy a RAG app with Pinecone Serverless',
  'Auto-Prompt Builder (with Hosted LangServe)',
  'Build a Full Stack RAG App With TypeScript',
  'Getting Started with Multi-Modal LLMs',
  'SQL Research Assistant',
  'Skeleton-of-Thought: Building a New Template from Scratch',
  'Benchmarking RAG over LangChain Docs',
  'Building a Research Assistant from Scratch',
  'LangServe and LangChain Templates Webinar'
]
*/
#### API Reference:
* [DocumentInterface](https://api.js.langchain.com/interfaces/langchain_core_documents.DocumentInterface.html) from `@langchain/core/documents`
* [YoutubeLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_youtube.YoutubeLoader.html) from `langchain/document_loaders/web/youtube`
Hereβs the metadata associated with each video.
We can see that each document also has a title, view count, publication date, and length:
import { getDocs } from "./docs.js";

const docs = await getDocs();

console.log(docs[0].metadata);
/**
{
  source: 'HAn9vnJy6S4',
  description: 'OpenGPTs is an open-source platform aimed at recreating an experience like the GPT Store - but with any model, any tools, and that you can self-host.\n' +
    '\n' +
    'This video covers both how to use it as well as how to build it.\n' +
    '\n' +
    'GitHub: https://github.com/langchain-ai/opengpts',
  title: 'OpenGPTs',
  view_count: 7262,
  author: 'LangChain'
}
*/

// And here's a sample from a document's contents:
console.log(docs[0].pageContent.slice(0, 500));
/*
hello today I want to talk about open gpts open gpts is a project that we built here at linkchain uh that replicates the GPT store in a few ways so it creates uh end user-facing friendly interface to create different Bots and these Bots can have access to different tools and they can uh be given files to retrieve things over and basically it's a way to create a variety of bots and expose the configuration of these Bots to end users it's all open source um it can be used with open AI it can be us
*/
### Indexing documents[β](#indexing-documents "Direct link to Indexing documents")
Whenever we perform retrieval we need to create an index of documents that we can query. Weβll use a vector store to index our documents, and weβll chunk them first to make our retrievals more concise and precise:
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { getDocs } from "./docs.js";

const docs = await getDocs();

const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 2000 });
const chunkedDocs = await textSplitter.splitDocuments(docs);
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});
const vectorStore = await Chroma.fromDocuments(chunkedDocs, embeddings, {
  collectionName: "yt-videos",
});
#### API Reference:
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
Then later, you can retrieve the index without having to re-query and embed:
import "chromadb";import { OpenAIEmbeddings } from "@langchain/openai";import { Chroma } from "@langchain/community/vectorstores/chroma";const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small",});const vectorStore = await Chroma.fromExistingCollection(embeddings, { collectionName: "yt-videos",});
[Module: null prototype] {
  AdminClient: [class AdminClient],
  ChromaClient: [class ChromaClient],
  CloudClient: [class CloudClient extends ChromaClient],
  CohereEmbeddingFunction: [class CohereEmbeddingFunction],
  Collection: [class Collection],
  DefaultEmbeddingFunction: [class _DefaultEmbeddingFunction],
  GoogleGenerativeAiEmbeddingFunction: [class _GoogleGenerativeAiEmbeddingFunction],
  HuggingFaceEmbeddingServerFunction: [class HuggingFaceEmbeddingServerFunction],
  IncludeEnum: {
    Documents: "documents",
    Embeddings: "embeddings",
    Metadatas: "metadatas",
    Distances: "distances"
  },
  JinaEmbeddingFunction: [class JinaEmbeddingFunction],
  OpenAIEmbeddingFunction: [class _OpenAIEmbeddingFunction],
  TransformersEmbeddingFunction: [class _TransformersEmbeddingFunction]
}
Retrieval without query analysis[β](#retrieval-without-query-analysis "Direct link to Retrieval without query analysis")
------------------------------------------------------------------------------------------------------------------------
We can perform similarity search on a user question directly to find chunks relevant to the question:
const searchResults = await vectorStore.similaritySearch(
  "how do I build a RAG agent"
);
console.log(searchResults[0].metadata.title);
console.log(searchResults[0].pageContent.slice(0, 500));
OpenGPTs
hardcoded that it will always do a retrieval step here the assistant decides whether to do a retrieval step or not sometimes this is good sometimes this is bad sometimes it you don't need to do a retrieval step when I said hi it didn't need to call it tool um but other times you know the the llm might mess up and not realize that it needs to do a retrieval step and so the rag bot will always do a retrieval step so it's more focused there because this is also a simpler architecture so it's always
This works pretty okay! Our first result is somewhat relevant to the question.
What if we wanted to search for results from a specific time period?
const searchResults = await vectorStore.similaritySearch(
  "videos on RAG published in 2023"
);
console.log(searchResults[0].metadata.title);
console.log(searchResults[0].metadata.publish_year);
console.log(searchResults[0].pageContent.slice(0, 500));
OpenGPTs
2024
hardcoded that it will always do a retrieval step here the assistant decides whether to do a retrieval step or not sometimes this is good sometimes this is bad sometimes it you don't need to do a retrieval step when I said hi it didn't need to call it tool um but other times you know the the llm might mess up and not realize that it needs to do a retrieval step and so the rag bot will always do a retrieval step so it's more focused there because this is also a simpler architecture so it's always
Our first result is from 2024, and not very relevant to the input. Since weβre just searching against document contents, thereβs no way for the results to be filtered on any document attributes.
This is just one failure mode that can arise. Letβs now take a look at how a basic form of query analysis can fix it!
Query analysis[β](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
To handle these failure modes we’ll do some query structuring. This will involve defining a **query schema** that contains some date filters, and using a function-calling model to convert a user question into a structured query.
### Query schema[β](#query-schema "Direct link to Query schema")
In this case we’ll add an explicit `publish_year` attribute so that results can be filtered by publication date.
import { z } from "zod";

const searchSchema = z
  .object({
    query: z
      .string()
      .describe("Similarity search query applied to video transcripts."),
    publish_year: z.number().optional().describe("Year of video publication."),
  })
  .describe(
    "Search over a database of tutorial videos about a software library."
  );
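If we instead wanted explicit min and max attributes for publication date, so that ranges rather than a single year can be filtered on, a variant like the following sketch could work. The `earliest_publish_year` and `latest_publish_year` attribute names are hypothetical, and this variant is not used in the rest of this guide:

import { z } from "zod";

// A sketch of a range-based variant of the schema above; the attribute
// names are hypothetical, not part of this guide's working example.
const rangeSearchSchema = z
  .object({
    query: z
      .string()
      .describe("Similarity search query applied to video transcripts."),
    earliest_publish_year: z
      .number()
      .optional()
      .describe("Earliest year of video publication, inclusive."),
    latest_publish_year: z
      .number()
      .optional()
      .describe("Latest year of video publication, inclusive."),
  })
  .describe(
    "Search over a database of tutorial videos about a software library."
  );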
### Query generation[β](#query-generation "Direct link to Query generation")
To convert user questions to structured queries weβll make use of OpenAIβs function-calling API. Specifically weβll use the new [ChatModel.withStructuredOutput()](https://api.js.langchain.com/classes/langchain_core_language_models_base.BaseLanguageModel.html#withStructuredOutput) constructor to handle passing the schema to the model and parsing the output.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
const structuredLLM = llm.withStructuredOutput(searchSchema, {
  name: "search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  structuredLLM,
]);
Letβs see what queries our analyzer generates for the questions we searched earlier:
console.log(await queryAnalyzer.invoke("How do I build a rag agent"));
{ query: "build a rag agent" }
console.log(await queryAnalyzer.invoke("videos on RAG published in 2023"));
{ query: "RAG", publish_year: 2023 }
Retrieval with query analysis[β](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
Our query analysis looks pretty good; now letβs try using our generated queries to actually perform retrieval.
**Note:** in our example, we used `withStructuredOutput` with `name: "search"`. This will force the LLM to call one - and only one - function, meaning that we will always have one optimized query to look up. Note that this is not always the case - see other guides for how to deal with situations when no - or multiple - optimized queries are returned.
import { DocumentInterface } from "@langchain/core/documents";

const retrieval = async (input: {
  query: string;
  publish_year?: number;
}): Promise<DocumentInterface[]> => {
  let _filter: Record<string, any> = {};
  if (input.publish_year) {
    // This syntax is specific to Chroma,
    // the vector database we are using.
    _filter = {
      publish_year: {
        $eq: input.publish_year,
      },
    };
  }

  return vectorStore.similaritySearch(input.query, undefined, _filter);
};
import { RunnableLambda } from "@langchain/core/runnables";

const retrievalChain = queryAnalyzer.pipe(
  new RunnableLambda({
    func: async (input) =>
      retrieval(input as unknown as { query: string; publish_year?: number }),
  })
);
We can now run this chain on the problematic input from before, and see that it yields only results from that year!
const results = await retrievalChain.invoke("RAG tutorial published in 2023");
console.log(
  results.map((doc) => ({
    title: doc.metadata.title,
    year: doc.metadata.publish_date,
  }))
);
[ { title: "Getting Started with Multi-Modal LLMs", year: "2023-12-20T08:00:00.000Z" }, { title: "LangServe and LangChain Templates Webinar", year: "2023-11-02T07:00:00.000Z" }, { title: "Getting Started with Multi-Modal LLMs", year: "2023-12-20T08:00:00.000Z" }, { title: "Building a Research Assistant from Scratch", year: "2023-11-16T08:00:00.000Z" }]
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/techniques/decomposition/
Decomposition
=============
When a user asks a question there is no guarantee that the relevant results can be returned with a single query. Sometimes to answer a question we need to split it into distinct sub-questions, retrieve results for each sub-question, and then answer using the cumulative context.
For example if a user asks: βHow is Web Voyager different from reflection agentsβ, and we have one document that explains Web Voyager and one that explains reflection agents but no document that compares the two, then weβd likely get better results by retrieving for both βWhat is Web Voyagerβ and βWhat are reflection agentsβ and combining the retrieved documents than by retrieving based on the user question directly.
This process of splitting an input into multiple distinct sub-queries is what we refer to as **query decomposition**. It is also sometimes referred to as sub-query generation. In this guide weβll walk through an example of how to do decomposition, using our example of a Q&A bot over the LangChain YouTube videos from the [Quickstart](/v0.1/docs/use_cases/query_analysis/quickstart/).
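To make the overall pattern concrete before diving into query generation, here is a minimal sketch of the retrieve-per-sub-question-and-combine step described above. The `answerFromContext` function is a hypothetical stand-in for a final LLM call over the pooled documents; this guide itself only covers generating the sub-questions:

import { DocumentInterface } from "@langchain/core/documents";

// A sketch of answering with cumulative context: retrieve for each
// sub-question, pool the documents, then answer once over the pool.
// `answerFromContext` is a hypothetical stand-in for an LLM call.
async function answerWithDecomposition(
  originalQuestion: string,
  subQuestions: string[],
  retriever: {
    getRelevantDocuments: (q: string) => Promise<DocumentInterface[]>;
  },
  answerFromContext: (
    question: string,
    context: DocumentInterface[]
  ) => Promise<string>
): Promise<string> {
  const pooled: DocumentInterface[] = [];
  for (const q of subQuestions) {
    // Retrieve results for each sub-question...
    pooled.push(...(await retriever.getRelevantDocuments(q)));
  }
  // ...then answer once using the cumulative context.
  return answerFromContext(originalQuestion, pooled);
}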
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/core zod uuid
yarn add @langchain/core zod uuid
pnpm add @langchain/core zod uuid
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
Query generation[β](#query-generation "Direct link to Query generation")
------------------------------------------------------------------------
To convert user questions to a list of sub-questions we’ll use an LLM function-calling API, which can return multiple functions each turn:
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";const llm = new ChatMistralAI({ model: "mistral-large-latest", temperature: 0});
```typescript
import { z } from "zod";

const subQuerySchema = z
  .object({
    subQuery: z.array(
      z.string().describe("A very specific query against the database")
    ),
  })
  .describe(
    "Search over a database of tutorial videos about a software library"
  );
```
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Perform query decomposition. Given a user question, break it down into distinct sub questions that
you need to answer in order to answer the original question.

If there are acronyms or words you are not familiar with, do not try to rephrase them.
If the query is already well formed, do not try to decompose it further.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  new MessagesPlaceholder({
    variableName: "examples",
    optional: true,
  }),
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(subQuerySchema, {
  name: "SubQuery",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```
Letβs try it out with a simple question:
```typescript
await queryAnalyzer.invoke({ question: "how to do rag" });
```

```
{ subQuery: [ "How to do rag" ] }
```
Now with two slightly more involved questions:
```typescript
await queryAnalyzer.invoke({
  question:
    "how to use multi-modal models in a chain and turn chain into a rest api",
});
```

```
{
  subQuery: [
    "How to use multi-modal models in a chain",
    "How to turn a chain into a REST API"
  ]
}
```
```typescript
await queryAnalyzer.invoke({
  question:
    "what's the difference between web voyager and reflection agents? do they use langgraph?",
});
```

```
{
  subQuery: [
    "Difference between Web Voyager and Reflection Agents",
    "Do Web Voyager and Reflection Agents use LangGraph?"
  ]
}
```
Adding examples and tuning the prompt[β](#adding-examples-and-tuning-the-prompt "Direct link to Adding examples and tuning the prompt")
---------------------------------------------------------------------------------------------------------------------------------------
This works pretty well, but we probably want it to decompose the last question even further to separate the queries about Web Voyager and reflection agents. If we arenβt sure up front what types of queries will do best with our index, we can also intentionally include some redundancy in our queries, so that we return both sub-queries and higher-level queries.
To tune our query generation results, we can add some examples of input questions and gold-standard output queries to our prompt. We can also try to improve our system message.
```typescript
const examples: Array<Record<string, any>> = [];
```
const question = "What's chat langchain, is it a langchain template?";const query = { query: "What's chat langchain, is it a langchain template?", subQueries: [ "What is chat langchain", "Is chat langchain a langchain template", ],};examples.push({ input: question, toolCalls: [query] });
1
```typescript
const question = "How would I use LangGraph to build an automaton";
const query = {
  query: "How would I use LangGraph to build an automaton",
  subQueries: ["How to build automaton with LangGraph"],
};
examples.push({ input: question, toolCalls: [query] });
```
```typescript
const question =
  "How to build multi-agent system and stream intermediate steps from it";
const query = {
  query: "How to build multi-agent system and stream intermediate steps from it",
  subQueries: [
    "How to build multi-agent system",
    "How to stream intermediate steps",
    "How to stream intermediate steps from multi-agent system",
  ],
};
examples.push({ input: question, toolCalls: [query] });
```
```typescript
const question =
  "What's the difference between LangChain agents and LangGraph?";
const query = {
  query: "What's the difference between LangChain agents and LangGraph?",
  subQueries: [
    "What's the difference between LangChain agents and LangGraph?",
    "What are LangChain agents",
    "What is LangGraph",
  ],
};
examples.push({ input: question, toolCalls: [query] });
```
Now we need to update our prompt template and chain so that the examples are included in each prompt. Since weβre working with LLM function calling, weβll need to do a bit of extra structuring to send example inputs and outputs to the model. Weβll create a `toolExampleToMessages` helper function to handle this for us:
```typescript
import { v4 as uuidV4 } from "uuid";
import {
  AIMessage,
  BaseMessage,
  HumanMessage,
  SystemMessage,
  ToolMessage,
} from "@langchain/core/messages";

const toolExampleToMessages = (
  example: Record<string, any>
): Array<BaseMessage> => {
  const messages: Array<BaseMessage> = [
    new HumanMessage({ content: example.input }),
  ];
  const openaiToolCalls = example.toolCalls.map((toolCall) => {
    return {
      id: uuidV4(),
      type: "function" as const,
      function: {
        name: "SubQuery",
        arguments: JSON.stringify(toolCall),
      },
    };
  });
  messages.push(
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openaiToolCalls },
    })
  );
  const toolOutputs =
    "toolOutputs" in example
      ? example.toolOutputs
      : Array(openaiToolCalls.length).fill(
          "This is an example of a correct usage of this tool. Make sure to continue using the tool this way."
        );
  toolOutputs.forEach((output, index) => {
    messages.push(
      new ToolMessage({
        content: output,
        tool_call_id: openaiToolCalls[index].id,
      })
    );
  });
  return messages;
};

const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat();
```
```typescript
import { MessagesPlaceholder } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Perform query decomposition. Given a user question, break it down into the most specific sub questions you can
which will help you answer the original question. Each sub question should be about a single concept/fact/idea.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  new MessagesPlaceholder({ variableName: "examples", optional: true }),
  ["human", "{question}"],
]);

const queryAnalyzerWithExamples = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    examples: () => exampleMessages,
  },
  prompt,
  llmWithTools,
]);
```
```typescript
await queryAnalyzerWithExamples.invoke(
  "what's the difference between web voyager and reflection agents? do they use langgraph?"
);
```

```
{
  query: "what's the difference between web voyager and reflection agents? do they use langgraph?",
  subQueries: [
    "What's the difference between web voyager and reflection agents",
    "Do web voyager and reflection agents use LangGraph"
  ]
}
```
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/how_to/multiple_queries/
Handle Multiple Queries
=======================
Sometimes, a query analysis technique may allow for multiple queries to be generated. In these cases, we need to remember to run all of the queries and then combine the results. We will show a simple example (using mock data) of how to do that.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm i @langchain/core @langchain/community @langchain/openai zod chromadb
# or
yarn add @langchain/core @langchain/community @langchain/openai zod chromadb
# or
pnpm add @langchain/core @langchain/community @langchain/openai zod chromadb
```
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
### Create Index[β](#create-index "Direct link to Create Index")
We will create a vectorstore over fake information.
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho", "Ankush worked at Facebook"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "multi_query",
});
const retriever = vectorstore.asRetriever(1);
```
Query analysis[β](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
We will use function calling to structure the output. We will let it return multiple queries.
```typescript
import { z } from "zod";

const searchSchema = z
  .object({
    queries: z.array(z.string()).describe("Distinct queries to search for"),
  })
  .describe("Search over a database of job records.");
```
### Pick your chat model:

Install and instantiate one of OpenAI, Anthropic, FireworksAI, or MistralAI as `llm`, exactly as shown in the decomposition guide above (for example, `npm i @langchain/openai`, set `OPENAI_API_KEY=your-api-key`, then `const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });`).
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.

If you need to look up two distinct pieces of information, you are allowed to do that!`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
We can see that this allows for creating multiple queries:
```typescript
await queryAnalyzer.invoke("where did Harrison Work");
```

```
{ queries: [ "Harrison" ] }
```
```typescript
await queryAnalyzer.invoke("where did Harrison and ankush Work");
```

```
{ queries: [ "Harrison work", "Ankush work" ] }
```
Retrieval with query analysis[β](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously. This will let us loop over the queries and not get blocked on the response time.
```typescript
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const docs = [];
  for (const query of response.queries) {
    const newDocs = await retriever.invoke(query, config);
    docs.push(...newDocs);
  }
  // You probably want to think about reranking or deduplicating documents here
  // But that is a separate topic
  return docs;
};

const customChain = new RunnableLambda({ func: chain });
```
```typescript
await customChain.invoke("where did Harrison Work");
```

```
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
```
```typescript
await customChain.invoke("where did Harrison and ankush Work");
```

```
[
  Document { pageContent: "Harrison worked at Kensho", metadata: {} },
  Document { pageContent: "Ankush worked at Facebook", metadata: {} }
]
```
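The loop above awaits each retrieval one at a time, and the comment in the chain points at deduplication as a follow-up. As a minimal sketch of both ideas (assuming the same `queryAnalyzer` and `retriever` as above), we can fire the retrievals concurrently with `Promise.all` and then drop documents with duplicate page content:

```typescript
import { Document } from "@langchain/core/documents";
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const parallelChain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  // Run all retrievals concurrently instead of awaiting them one by one.
  const perQueryDocs = await Promise.all(
    response.queries.map((query: string) => retriever.invoke(query, config))
  );
  // Naive deduplication by page content; a real system might rerank instead.
  const seen = new Set<string>();
  return perQueryDocs.flat().filter((doc: Document) => {
    if (seen.has(doc.pageContent)) return false;
    seen.add(doc.pageContent);
    return true;
  });
};

const customParallelChain = new RunnableLambda({ func: parallelChain });
```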
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/how_to/few_shot/
Add Examples to the Prompt
==========================
As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.
Letβs take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/v0.1/docs/use_cases/query_analysis/quickstart/).
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm i @langchain/core zod uuid
# or
yarn add @langchain/core zod uuid
# or
pnpm add @langchain/core zod uuid
```
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Query schema[β](#query-schema "Direct link to Query schema")
------------------------------------------------------------
Weβll define a query schema that we want our model to output. To make our query analysis a bit more interesting, weβll add a `subQueries` field that contains more narrow questions derived from the top level question.
```typescript
import { z } from "zod";

const subQueriesDescription = `If the original question contains multiple distinct sub-questions,
or if there are more generic questions that would be helpful to answer in
order to answer the original question, write a list of all relevant sub-questions.
Make sure this list is comprehensive and covers all parts of the original question.
It's ok if there's redundancy in the sub-questions, it's better to cover all the bases than to miss some.
Make sure the sub-questions are as narrowly focused as possible in order to get the most relevant results.`;

const searchSchema = z.object({
  query: z
    .string()
    .describe("Primary similarity search query applied to video transcripts."),
  subQueries: z.array(z.string()).optional().describe(subQueriesDescription),
  publishYear: z.number().optional().describe("Year video was published"),
});
```
Query generation[β](#query-generation "Direct link to Query generation")
------------------------------------------------------------------------
### Pick your chat model:

Install and instantiate one of OpenAI, Anthropic, FireworksAI, or MistralAI as `llm`, exactly as shown in the decomposition guide above (for example, `npm i @langchain/openai`, set `OPENAI_API_KEY=your-api-key`, then `const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });`).
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  new MessagesPlaceholder({
    variableName: "examples",
    optional: true,
  }),
  ["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
Letβs try out our query analyzer without any examples in the prompt:
```typescript
await queryAnalyzer.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);
```

```
{
  query: "difference between Web Voyager and Reflection Agents",
  subQueries: [ "Do Web Voyager and Reflection Agents use LangGraph?" ]
}
```
Adding examples and tuning the prompt[β](#adding-examples-and-tuning-the-prompt "Direct link to Adding examples and tuning the prompt")
---------------------------------------------------------------------------------------------------------------------------------------
This works pretty well, but we probably want it to decompose the question even further to separate the queries about Web Voyager and reflection agents.
To tune our query generation results, we can add some examples of input questions and gold-standard output queries to our prompt.
```typescript
const examples: Array<Record<string, any>> = [];
```
const question = "What's chat langchain, is it a langchain template?";const query = { query: "What is chat langchain and is it a langchain template?", subQueries: ["What is chat langchain", "What is a langchain template"],};examples.push({ input: question, toolCalls: [query] });
1
```typescript
const question =
  "How to build multi-agent system and stream intermediate steps from it";
const query = {
  query: "How to build multi-agent system and stream intermediate steps from it",
  subQueries: [
    "How to build multi-agent system",
    "How to stream intermediate steps from multi-agent system",
    "How to stream intermediate steps",
  ],
};
examples.push({ input: question, toolCalls: [query] });
```
const question = "LangChain agents vs LangGraph?";const query = { query: "What's the difference between LangChain agents and LangGraph? How do you deploy them?", subQueries: [ "What are LangChain agents", "What is LangGraph", "How do you deploy LangChain agents", "How do you deploy LangGraph", ],};examples.push({ input: question, toolCalls: [query] });
3
Now we need to update our prompt template and chain so that the examples are included in each prompt. Since weβre working with LLM function calling, weβll need to do a bit of extra structuring to send example inputs and outputs to the model. Weβll create a `toolExampleToMessages` helper function to handle this for us:
```typescript
import {
  AIMessage,
  BaseMessage,
  HumanMessage,
  SystemMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuidV4 } from "uuid";

const toolExampleToMessages = (
  example: Record<string, any>
): Array<BaseMessage> => {
  const messages: Array<BaseMessage> = [
    new HumanMessage({ content: example.input }),
  ];
  const openaiToolCalls = example.toolCalls.map((toolCall) => {
    return {
      id: uuidV4(),
      type: "function" as const,
      function: {
        name: "search",
        arguments: JSON.stringify(toolCall),
      },
    };
  });
  messages.push(
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openaiToolCalls },
    })
  );
  const toolOutputs =
    "toolOutputs" in example
      ? example.toolOutputs
      : Array(openaiToolCalls.length).fill(
          "You have correctly called this tool."
        );
  toolOutputs.forEach((output, index) => {
    messages.push(
      new ToolMessage({
        content: output,
        tool_call_id: openaiToolCalls[index].id,
      })
    );
  });
  return messages;
};

const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat();
```
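Note that `toolExampleToMessages` also accepts an optional `toolOutputs` field on an example, letting you attach custom feedback for each example tool call in place of the default message. A small hypothetical illustration (it would need to be pushed before `exampleMessages` is built above):

```typescript
// Hypothetical example supplying custom per-call feedback via `toolOutputs`.
examples.push({
  input: "How to deploy a LangServe app",
  toolCalls: [
    {
      query: "How to deploy a LangServe app",
      subQueries: ["What is LangServe", "How to deploy a LangServe app"],
    },
  ],
  toolOutputs: [
    "Good decomposition: the sub-queries cover both the concept and the task.",
  ],
});
```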
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const queryAnalyzerWithExamples = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    examples: () => exampleMessages,
  },
  prompt,
  llmWithTools,
]);
```
```typescript
await queryAnalyzerWithExamples.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);
```

```
{
  query: "Difference between Web Voyager and Reflection agents, do they both use LangGraph?",
  subQueries: [
    "Difference between Web Voyager and Reflection agents",
    "Do Web Voyager and Reflection agents use LangGraph"
  ]
}
```
Thanks to our examples we get a slightly more decomposed search query. With some more prompt engineering and tuning of our examples we could improve query generation even more.
You can see that the examples are passed to the model as messages in the [LangSmith trace](https://smith.langchain.com/public/102829c3-69fc-4cb7-b28b-399ae2c9c008/r).
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/how_to/multiple_retrievers/
Handle Multiple Retrievers
==========================
Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select the retriever to use. We will show a simple example (using mock data) of how to do that.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm i @langchain/core @langchain/community @langchain/openai zod chromadb
# or
yarn add @langchain/core @langchain/community @langchain/openai zod chromadb
# or
pnpm add @langchain/core @langchain/community @langchain/openai zod chromadb
```
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
### Create Index[β](#create-index "Direct link to Create Index")
We will create a vectorstore over fake information.
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "harrison",
});
const retrieverHarrison = vectorstore.asRetriever(1);
```
```typescript
const texts = ["Ankush worked at Facebook"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "ankush",
});
const retrieverAnkush = vectorstore.asRetriever(1);
```
Query analysis[β](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
We will use function calling to structure the output, and have it return a search query along with the person to look things up for.
```typescript
import { z } from "zod";

const searchSchema = z.object({
  query: z.string().describe("Query to look up"),
  person: z
    .string()
    .describe(
      "Person to look things up for. Should be `HARRISON` or `ANKUSH`."
    ),
});
```
### Pick your chat model:

Install and instantiate one of OpenAI, Anthropic, FireworksAI, or MistralAI as `llm`, exactly as shown in the decomposition guide above (for example, `npm i @langchain/openai`, set `OPENAI_API_KEY=your-api-key`, then `const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });`).
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
We can see that this allows for routing between retrievers:
```typescript
await queryAnalyzer.invoke("where did Harrison Work");
```

```
{ query: "workplace of Harrison", person: "HARRISON" }
```
```typescript
await queryAnalyzer.invoke("where did ankush Work");
```

```
{ query: "Workplace of Ankush", person: "ANKUSH" }
```
Retrieval with query analysis[β](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query:
```typescript
const retrievers = {
  HARRISON: retrieverHarrison,
  ANKUSH: retrieverAnkush,
};
```
```typescript
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const retriever = retrievers[response.person];
  return retriever.invoke(response.query, config);
};

const customChain = new RunnableLambda({ func: chain });
```
```typescript
await customChain.invoke("where did Harrison Work");
```

```
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
```
```typescript
await customChain.invoke("where did ankush Work");
```

```
[ Document { pageContent: "Ankush worked at Facebook", metadata: {} } ]
```
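One thing to watch for: the model is only prompted, not hard-constrained, to answer with `HARRISON` or `ANKUSH`, so `retrievers[response.person]` could come back `undefined`. A minimal defensive sketch (the fallback behavior here is an assumption, not part of the original guide):

```typescript
const safeChain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  const retriever = retrievers[response.person as keyof typeof retrievers];
  if (retriever === undefined) {
    // Hypothetical fallback: search every index when the key is unrecognized.
    const results = await Promise.all(
      Object.values(retrievers).map((r) => r.invoke(response.query, config))
    );
    return results.flat();
  }
  return retriever.invoke(response.query, config);
};
```

Alternatively, the schema itself could use `z.enum(["HARRISON", "ANKUSH"])` for the `person` field so that invalid values fail fast at parse time.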
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/how_to/no_queries/
Handle Cases Where No Queries are Generated
===========================================
Sometimes, a query analysis technique may allow for any number of queries to be generated - including no queries! In this case, our overall chain will need to inspect the result of the query analysis before deciding whether to call the retriever or not.
We will use mock data for this example.
Setup[β](#setup "Direct link to Setup")
---------------------------------------
#### Install dependencies[β](#install-dependencies "Direct link to Install dependencies")
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm i @langchain/core @langchain/community @langchain/openai zod chromadb
# or
yarn add @langchain/core @langchain/community @langchain/openai zod chromadb
# or
pnpm add @langchain/core @langchain/community @langchain/openai zod chromadb
```
#### Set environment variables[β](#set-environment-variables "Direct link to Set environment variables")
```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
### Create Index[β](#create-index "Direct link to Create Index")
We will create a vectorstore over fake information.
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";

const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
  collectionName: "harrison",
});
const retriever = vectorstore.asRetriever(1);
```
Query analysis[β](#query-analysis "Direct link to Query analysis")
------------------------------------------------------------------
We will use function calling to structure the output. However, we will configure the LLM so that it doesnβt NEED to call the function representing a search query (should it decide not to). We will also then use a prompt to do query analysis that explicitly lays out when it should and shouldnβt make a search.
```typescript
import { z } from "zod";

const searchSchema = z.object({
  query: z.string().describe("Similarity search query applied to job record."),
});
```
### Pick your chat model:

Install and instantiate one of OpenAI, Anthropic, FireworksAI, or MistralAI as `llm`, exactly as shown in the decomposition guide above (for example, `npm i @langchain/openai`, set `OPENAI_API_KEY=your-api-key`, then `const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });`).
```typescript
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const system = `You have the ability to issue search queries to get information to help answer user information.

You do not NEED to look things up. If you don't need to, then just respond normally.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);
const llmWithTools = llm.bind({
  tools: [
    {
      type: "function" as const,
      function: {
        name: "search",
        description: "Search over a database of job records.",
        parameters: zodToJsonSchema(searchSchema),
      },
    },
  ],
});
const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);
```
We can see that by invoking this we get a message that sometimes, but not always, includes a tool call.
```typescript
await queryAnalyzer.invoke("where did Harrison Work");
```

```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "",
    additional_kwargs: {
      function_call: undefined,
      tool_calls: [
        {
          id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "",
  name: undefined,
  additional_kwargs: {
    function_call: undefined,
    tool_calls: [
      {
        id: "call_uqHm5OMbXBkmqDr7Xzj8EMmd",
        type: "function",
        function: { name: "search", arguments: '{"query":"Harrison"}' }
      }
    ]
  }
}
```
```typescript
await queryAnalyzer.invoke("hi!");
```

```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Hello! How can I assist you today?",
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Hello! How can I assist you today?",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
```
Retrieval with query analysis[β](#retrieval-with-query-analysis "Direct link to Retrieval with query analysis")
---------------------------------------------------------------------------------------------------------------
So how would we include this in a chain? Letβs look at an example below.
```typescript
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";

const outputParser = new JsonOutputKeyToolsParser({
  keyName: "search",
});
```
```typescript
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";

const chain = async (question: string, config?: RunnableConfig) => {
  const response = await queryAnalyzer.invoke(question, config);
  if (
    "tool_calls" in response.additional_kwargs &&
    response.additional_kwargs.tool_calls !== undefined
  ) {
    const query = await outputParser.invoke(response, config);
    return retriever.invoke(query[0].query, config);
  } else {
    return response;
  }
};

const customChain = new RunnableLambda({ func: chain });
```
```typescript
await customChain.invoke("where did Harrison Work");
```

```
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
```
```typescript
await customChain.invoke("hi!");
```

```
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: "Hello! How can I assist you today?",
    additional_kwargs: { function_call: undefined, tool_calls: undefined }
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: "Hello! How can I assist you today?",
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
```
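Because the chain can return either a list of documents or a raw model message, downstream code needs a small branch on the result type. A minimal sketch of consuming that union, assuming the `customChain` above:

```typescript
const result = await customChain.invoke("where did Harrison Work");

if (Array.isArray(result)) {
  // Retrieval happened: we have an array of Documents.
  console.log(result.map((doc) => doc.pageContent));
} else {
  // The model chose to respond directly: we have an AIMessage.
  console.log(result.content);
}
```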
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/how_to/constructing_filters/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Use cases](/v0.1/docs/use_cases/)
* [SQL](/v0.1/docs/use_cases/sql/)
* [Chatbots](/v0.1/docs/use_cases/chatbots/)
* [Extraction](/v0.1/docs/use_cases/extraction/)
* [Query Analysis](/v0.1/docs/use_cases/query_analysis/)
* [Quickstart](/v0.1/docs/use_cases/query_analysis/quickstart/)
* [Techniques](/v0.1/docs/use_cases/query_analysis/techniques/decomposition/)
* [How-To Guides](/v0.1/docs/use_cases/query_analysis/how_to/few_shot/)
* [Add Examples to the Prompt](/v0.1/docs/use_cases/query_analysis/how_to/few_shot/)
* [Handle Cases Where No Queries are Generated](/v0.1/docs/use_cases/query_analysis/how_to/no_queries/)
* [Handle Multiple Queries](/v0.1/docs/use_cases/query_analysis/how_to/multiple_queries/)
* [Handle Multiple Retrievers](/v0.1/docs/use_cases/query_analysis/how_to/multiple_retrievers/)
* [Construct Filters](/v0.1/docs/use_cases/query_analysis/how_to/constructing_filters/)
* [Deal with High Cardinality Categoricals](/v0.1/docs/use_cases/query_analysis/how_to/high_cardinality/)
* [Q&A with RAG](/v0.1/docs/use_cases/question_answering/)
* [Tool use](/v0.1/docs/use_cases/tool_use/)
* [Interacting with APIs](/v0.1/docs/use_cases/api/)
* [Tabular Question Answering](/v0.1/docs/use_cases/tabular/)
* [Graphs](/v0.1/docs/use_cases/graph/)
* [Summarization](/v0.1/docs/use_cases/summarization/)
* [Agent Simulations](/v0.1/docs/use_cases/agent_simulations/)
* [Autonomous Agents](/v0.1/docs/use_cases/autonomous_agents/)
* [Code Understanding](/v0.1/docs/use_cases/code_understanding/)
* [Audio/Video Structured Extraction](/v0.1/docs/use_cases/media/)
* [](/v0.1/)
* [Use cases](/v0.1/docs/use_cases/)
* [Query Analysis](/v0.1/docs/use_cases/query_analysis/)
* How-To Guides
* Construct Filters
On this page
Construct Filters
=================
We may want to do query analysis to extract filters to pass into retrievers. One way to have the LLM represent these filters is as a Zod schema. The problem then becomes converting that Zod schema into a filter that can be passed into a retriever.

This can be done manually, but LangChain also provides some "Translators" that are able to translate from a common syntax into filters specific to each retriever. Here, we will cover how to use those translators.
Setup
-----

#### Install dependencies

* npm: `npm i langchain zod`
* yarn: `yarn add langchain zod`
* pnpm: `pnpm add langchain zod`
In this example, `startYear` and `author` are both attributes to filter on.
```typescript
import { z } from "zod";

const searchSchema = z.object({
  query: z.string(),
  startYear: z.number().optional(),
  author: z.string().optional(),
});
```

```typescript
const searchQuery: z.infer<typeof searchSchema> = {
  query: "RAG",
  startYear: 2022,
  author: "LangChain",
};
```

```typescript
import { Comparison, Comparator } from "langchain/chains/query_constructor/ir";

function constructComparisons(
  query: z.infer<typeof searchSchema>
): Comparison[] {
  const comparisons: Comparison[] = [];
  if (query.startYear !== undefined) {
    comparisons.push(
      new Comparison("gt" as Comparator, "start_year", query.startYear)
    );
  }
  if (query.author !== undefined) {
    comparisons.push(
      new Comparison("eq" as Comparator, "author", query.author)
    );
  }
  return comparisons;
}
```

```typescript
const comparisons = constructComparisons(searchQuery);
```

```typescript
import { Operation, Operator } from "langchain/chains/query_constructor/ir";

const _filter = new Operation("and" as Operator, comparisons);
```

```typescript
import { ChromaTranslator } from "langchain/retrievers/self_query/chroma";

new ChromaTranslator().visitOperation(_filter);
```

```
{
  "$and": [
    { start_year: { "$gt": 2022 } },
    { author: { "$eq": "LangChain" } }
  ]
}
```
* * *

https://js.langchain.com/v0.1/docs/use_cases/extraction/quickstart/
Quickstart
==========
In this quick start, we will use LLMs that are capable of **function/tool calling** to extract information from text.
info
Extraction using **function/tool calling** only works with [models that support **function/tool calling**](/v0.1/docs/modules/model_io/chat/function_calling/).
Set up
------
We will use the new [withStructuredOutput()](/v0.1/docs/integrations/chat/) method available on LLMs that are capable of **function/tool calling**, along with the popular and intuitive [Zod](https://zod.dev/) typing library.
Select a model, install its dependencies, and set your API keys as environment variables. We'll use Mistral as an example below:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm i @langchain/mistralai zod`
* yarn: `yarn add @langchain/mistralai zod`
* pnpm: `pnpm add @langchain/mistralai zod`
You can also see [this page](/v0.1/docs/integrations/chat/) for an overview of which models support different kinds of structured output.
The Schema
----------
First, we need to describe what information we want to extract from the text.
For convenience, we'll use Zod to define an example schema to extract personal information. You may also use JSON schema directly if you wish.
```typescript
import { z } from "zod";

// Note that:
// 1. Each field is `optional` -- this allows the model to decline to extract it!
// 2. Each field uses the `.describe()` method -- this description is used by the LLM.
// Having a good description can help improve extraction results.
const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");
```
There are two best practices when defining schema:
1. Document the **attributes** and the **schema** itself: This information is sent to the LLM and is used to improve the quality of information extraction.
2. Do not force the LLM to make up information! Above we used `.optional()` for the attributes, allowing the LLM to output `undefined` if it doesn't know the answer.
info
For best performance, document the schema well and make sure the model isn't forced to return results if there's no information to be extracted from the text.
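As a quick illustration of both practices, compare a field that forces the model to produce a value with one that documents the attribute and lets the model decline. This is a hedged sketch; the `age` field is invented for illustration:

```typescript
import { z } from "zod";

// Brittle: the model must produce an age even when the text never mentions one.
const brittleSchema = z.object({
  age: z.number().describe("Age in years"),
});

// Robust: the attribute is documented and optional, so the model can decline.
const robustSchema = z.object({
  age: z
    .optional(z.number())
    .describe("Age in years, if explicitly stated in the text"),
});
```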
The Extractor
-------------

Let's create an information extractor using the schema we defined above.

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // Please see the how-to about improving performance with
  // reference examples.
  // new MessagesPlaceholder("examples"),
  ["human", "{text}"],
]);
```
We need to use a model that supports function/tool calling.
Please review [the chat model integration page](/v0.1/docs/integrations/chat/) for a list of some models that can be used with this API.

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});

const extractionRunnable = prompt.pipe(llm.withStructuredOutput(personSchema));
```
Let's test it out!

```typescript
const text = "Alan Smith is 6 feet tall and has blond hair.";
await extractionRunnable.invoke({ text });
```

```
{ name: "Alan Smith", height_in_meters: "1.8288", hair_color: "blond" }
```
info
Extraction is Generative 🤯
LLMs are generative models, so they can do some pretty cool things like correctly extract the height of the person in meters even though it was provided in feet!
Multiple Entities
-----------------

In **most cases**, you should be extracting a list of entities rather than a single entity.

This can be easily achieved with Zod by nesting models inside one another. Here's an example using the `personSchema` we defined above:

```typescript
const dataSchema = z.object({
  people: z.array(personSchema),
});
```
info
Extraction might not be perfect here. Please continue to see how to use **Reference Examples** to improve the quality of extraction, and see the **guidelines** section!
```typescript
const extractionRunnable = prompt.pipe(llm.withStructuredOutput(dataSchema));

const text =
  "My name is Jeff, my hair is black and i am 6 feet tall. Anna has the same color hair as me.";
await extractionRunnable.invoke({ text });
```

```
{
  people: [
    { name: "Jeff", hair_color: "black", height_in_meters: "1.8288" },
    { name: "Anna", hair_color: "black" }
  ]
}
```
tip
When the schema accommodates the extraction of **multiple entities**, it also allows the model to extract **no entities** if no relevant information is in the text by providing an empty list.
This is usually a **good** thing! It allows specifying **required** attributes on an entity without necessarily forcing the model to detect this entity.
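For example, here is a quick sketch of that behavior; the input text is our own, and the exact output will vary by model:

```typescript
// No people are mentioned, so a well-behaved model should return an empty list.
await extractionRunnable.invoke({
  text: "The stock market closed slightly higher on Tuesday.",
});
// -> { people: [] }  (illustrative; actual output depends on the model)
```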
Next steps
----------

Now that you understand the basics of extraction with LangChain, you're ready to proceed to the rest of the how-to guide:
* [Add Examples](/v0.1/docs/use_cases/extraction/how_to/examples/): Learn how to use **reference examples** to improve performance.
* [Handle Long Text](/v0.1/docs/use_cases/extraction/how_to/handle_long_text/): What should you do if the text does not fit into the context window of the LLM?
* [Handle Files](/v0.1/docs/use_cases/extraction/how_to/handle_files/): Examples of using LangChain document loaders and parsers to extract from files like PDFs.
* [Without function calling](/v0.1/docs/use_cases/extraction/how_to/parse/): Use a prompt based approach to extract with models that do not support **tool/function calling**.
* [Guidelines](/v0.1/docs/use_cases/extraction/guidelines/): Guidelines for getting good performance on extraction tasks.
* * *

https://js.langchain.com/v0.1/docs/use_cases/extraction/how_to/examples/
Use reference examples
======================
The quality of extractions can often be improved by providing reference examples to the LLM.
tip
While this tutorial focuses on how to use examples with a tool-calling model, the technique is generally applicable and will also work with JSON mode or prompt-based techniques.
We'll use OpenAI's GPT-4 this time for its robust support for `ToolMessage`s:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm i @langchain/openai zod uuid`
* yarn: `yarn add @langchain/openai zod uuid`
* pnpm: `pnpm add @langchain/openai zod uuid`
Let's define a prompt:

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // The reference examples are inserted here as a list of messages:
  new MessagesPlaceholder("examples"),
  ["human", "{text}"],
]);
```
Test out the template:
```typescript
import { HumanMessage } from "@langchain/core/messages";

const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: [new HumanMessage("testing 1 2 3")],
});
promptValue.toChatMessages();
```

```
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You are an expert extraction algorithm.\n" +
        "Only extract relevant information from the text.\n" +
        "If you do n"... 87 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You are an expert extraction algorithm.\n" +
      "Only extract relevant information from the text.\n" +
      "If you do n"... 87 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "testing 1 2 3", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "testing 1 2 3",
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "this is some text", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "this is some text",
    name: undefined,
    additional_kwargs: {}
  }
]
```
Define the schema
-----------------

Let's re-use the people schema from the quickstart.

```typescript
import { z } from "zod";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const peopleSchema = z.object({
  people: z.array(personSchema),
});
```
Define reference examples
-------------------------
Examples can be defined as a list of input-output pairs.
Each example contains an example `input` text and an example `output` showing what should be extracted from the text.
info
The below example is a bit more advanced - the format of the example needs to match the API used (e.g., tool calling or JSON mode etc.).
Here, the formatted examples will match the format expected for the OpenAI tool calling API since that's what we're using.
To provide reference examples to the model, we will mock out a fake chat history containing successful usages of the given tool. Because the model can choose to call multiple tools at once (or the same tool multiple times), the example's outputs are an array:
```typescript
import {
  AIMessage,
  type BaseMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuid } from "uuid";

type OpenAIToolCall = {
  id: string;
  type: "function";
  function: {
    name: string;
    arguments: string;
  };
};

type Example = {
  input: string;
  toolCallOutputs: Record<string, any>[];
};

/**
 * This function converts an example into a list of messages that can be fed into an LLM.
 *
 * This code serves as an adapter that transforms our example into a list of messages
 * that can be processed by a chat model.
 *
 * The list of messages for each example includes:
 *
 * 1) HumanMessage: This contains the content from which information should be extracted.
 * 2) AIMessage: This contains the information extracted by the model.
 * 3) ToolMessage: This provides confirmation to the model that the tool was requested correctly.
 *
 * The inclusion of ToolMessage is necessary because some chat models are highly optimized for agents,
 * making them less suitable for an extraction use case.
 */
function toolExampleToMessages(example: Example): BaseMessage[] {
  const openAIToolCalls: OpenAIToolCall[] = example.toolCallOutputs.map(
    (output) => {
      return {
        id: uuid(),
        type: "function",
        function: {
          // The name of the function right now corresponds
          // to the passed name.
          name: "extract",
          arguments: JSON.stringify(output),
        },
      };
    }
  );
  const messages: BaseMessage[] = [
    new HumanMessage(example.input),
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openAIToolCalls },
    }),
  ];
  const toolMessages = openAIToolCalls.map((toolCall, i) => {
    // Return the mocked successful result for a given tool call.
    return new ToolMessage({
      content: "You have correctly called this tool.",
      tool_call_id: toolCall.id,
    });
  });
  return messages.concat(toolMessages);
}
```
Next let's define our examples and then convert them into message format.

```typescript
const examples: Example[] = [
  {
    input:
      "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
    toolCallOutputs: [{}],
  },
  {
    input: "Fiona traveled far from France to Spain.",
    toolCallOutputs: [
      {
        name: "Fiona",
      },
    ],
  },
];

const exampleMessages = [];
for (const example of examples) {
  exampleMessages.push(...toolExampleToMessages(example));
}
```

`exampleMessages` now contains 6 messages in total: a human, AI, and tool message for each of the two examples.
Let's test out the prompt:

```typescript
const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: exampleMessages,
});
promptValue.toChatMessages();
```

```
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You are an expert extraction algorithm.\n" +
        "Only extract relevant information from the text.\n" +
        "If you do n"... 87 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You are an expert extraction algorithm.\n" +
      "Only extract relevant information from the text.\n" +
      "If you do n"... 87 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
    name: undefined,
    additional_kwargs: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "",
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You have correctly called this tool.",
      tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You have correctly called this tool.",
    name: undefined,
    additional_kwargs: {},
    tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259"
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Fiona traveled far from France to Spain.",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Fiona traveled far from France to Spain.",
    name: undefined,
    additional_kwargs: {}
  },
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "",
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
          type: "function",
          function: [Object]
        }
      ]
    }
  },
  ToolMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "You have correctly called this tool.",
      tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "You have correctly called this tool.",
    name: undefined,
    additional_kwargs: {},
    tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66"
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: { content: "this is some text", additional_kwargs: {} },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "this is some text",
    name: undefined,
    additional_kwargs: {}
  }
]
```
Create an extractor
-------------------

Here, we'll create an extractor using **gpt-4**.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// We will be using tool calling mode, which
// requires a tool calling capable model.
const llm = new ChatOpenAI({
  // Consider benchmarking with the best model you can to get
  // a sense of the best possible quality.
  model: "gpt-4-0125-preview",
  temperature: 0,
});

// For function/tool calling, we can also supply a name for the schema
// to give the LLM additional context about what it's extracting.
const extractionRunnable = prompt.pipe(
  llm.withStructuredOutput(peopleSchema, { name: "people" })
);
```
Without examples 😿
-------------------

Notice that even though we're using `gpt-4`, it's unreliable with a **very simple** test case!
We run it 5 times below to emphasize this:
const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, examples: [], }); console.log(result);}
{ people: [ { name: "earth", hair_color: "grey", height_in_meters: "1" } ]}{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "moon" } ] }{ people: [ { name: "earth", hair_color: "1 moon" } ] }{ people: [] }
With examples 😻
----------------
Reference examples help fix the failure!
const text = "The solar system is large, but earth has only 1 moon.";for (let i = 0; i < 5; i++) { const result = await extractionRunnable.invoke({ text, // Example messages from above examples: exampleMessages, }); console.log(result);}
{ people: [] }{ people: [] }{ people: [] }{ people: [] }{ people: [] }
```typescript
await extractionRunnable.invoke({
  text: "My name is Hair-ison. My hair is black. I am 3 meters tall.",
  examples: exampleMessages,
});
```

```
{
  people: [ { name: "Hair-ison", hair_color: "black", height_in_meters: "3" } ]
}
```
* * *

https://js.langchain.com/v0.1/docs/use_cases/extraction/how_to/handle_long_text/
Handle long text
================
When working with files, like PDFs, you're likely to encounter text that exceeds your language model's context window. To process this text, consider these strategies:
1. **Change LLM** Choose a different LLM that supports a larger context window.
2. **Brute Force** Chunk the document, and extract content from each chunk.
3. **RAG** Chunk the document, index the chunks, and only extract content from a subset of chunks that look "relevant".
Keep in mind that these strategies have different trade-offs, and the best strategy likely depends on the application you're designing!
Set up
------

First, let's install some required dependencies:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm i @langchain/openai zod cheerio`
* yarn: `yarn add @langchain/openai zod cheerio`
* pnpm: `pnpm add @langchain/openai zod cheerio`
Next, we need some example data! Let's download an article about [cars from Wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain `Document`.

```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
// Only required in a Deno notebook environment to load the peer dep.
import "cheerio";

const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/Car");
const docs = await loader.load();
```
```typescript
docs[0].pageContent.length;
```

```
95865
```
Define the schema
-----------------

Here, we'll define a schema to extract key developments from the text.

```typescript
import { z } from "zod";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const keyDevelopmentSchema = z
  .object({
    year: z
      .number()
      .describe("The year when there was an important historic development."),
    description: z
      .string()
      .describe("What happened in this year? What was the development?"),
    evidence: z
      .string()
      .describe(
        "Repeat verbatim the sentence(s) from which the year and description information were extracted"
      ),
  })
  .describe("Information about a development in the history of cars.");

const extractionDataSchema = z
  .object({
    key_developments: z.array(keyDevelopmentSchema),
  })
  .describe(
    "Extracted information about key developments in the history of cars"
  );

const SYSTEM_PROMPT_TEMPLATE = [
  "You are an expert at identifying key historic developments in text.",
  "Only extract important historic developments. Extract nothing if no important information can be found in the text.",
].join("\n");

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality
// 2) Introduce additional parameters to take context into account (e.g., include metadata
//    about the document from which the text was extracted.)
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // Keep on reading through this use case to see how to use examples to improve performance
  // MessagesPlaceholder('examples'),
  ["human", "{text}"],
]);

// We will be using tool calling mode, which
// requires a tool calling capable model.
const llm = new ChatOpenAI({
  model: "gpt-4-0125-preview",
  temperature: 0,
});

const extractionChain = prompt.pipe(
  llm.withStructuredOutput(extractionDataSchema)
);
```
Brute force approach
--------------------

Split the documents into chunks such that each chunk fits into the context window of the LLM.

```typescript
import { TokenTextSplitter } from "langchain/text_splitter";

const textSplitter = new TokenTextSplitter({
  chunkSize: 2000,
  chunkOverlap: 20,
});

// Note that this method takes an array of docs
const splitDocs = await textSplitter.splitDocuments(docs);
```
Use the `.batch` method present on all runnables to run the extraction in **parallel** across each chunk!
tip
You can often use `.batch()` to parallelize the extractions!
If your model is exposed via an API, this will likely speed up your extraction flow.
```typescript
// Limit just to the first 3 chunks
// so the code can be re-run quickly
const firstFewTexts = splitDocs.slice(0, 3).map((doc) => doc.pageContent);

const extractionChainParams = firstFewTexts.map((text) => {
  return { text };
});

const results = await extractionChain.batch(extractionChainParams, {
  maxConcurrency: 5,
});
```
### Merge results

After extracting data from across the chunks, we'll want to merge the extractions together.

```typescript
const keyDevelopments = results.flatMap((result) => result.key_developments);

keyDevelopments.slice(0, 20);
```

```
[
  { year: 0, description: "", evidence: "" },
  {
    year: 1769,
    description: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle.",
    evidence: "French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769."
  },
  {
    year: 1808,
    description: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 25 more characters,
    evidence: "French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combu"... 33 more characters
  },
  {
    year: 1886,
    description: "German inventor Carl Benz patented his Benz Patent-Motorwagen, inventing the modern car—a practical,"... 40 more characters,
    evidence: "The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when German"... 56 more characters
  },
  {
    year: 1908,
    description: "The 1908 Model T, an American car manufactured by the Ford Motor Company, became one of the first ca"... 28 more characters,
    evidence: "One of the first cars affordable by the masses was the 1908 Model T, an American car manufactured by"... 24 more characters
  }
]
```
RAG based approach
------------------

Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the most relevant chunks.
caution
It can be difficult to identify which chunks are relevant.
For example, in the `car` article we're using here, most of the article contains key development information. So by using **RAG**, we'll likely be throwing out a lot of relevant information.
We suggest experimenting with your use case and determining whether this approach works or not.
Here's a simple example that relies on an in-memory demo vector store, `MemoryVectorStore`.

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Only load the first 10 docs for speed in this demo use-case
const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs.slice(0, 10),
  new OpenAIEmbeddings()
);

// Only extract from top document
const retriever = vectorstore.asRetriever({ k: 1 });
```
In this case the RAG extractor is only looking at the top document.
```typescript
import { RunnableSequence } from "@langchain/core/runnables";

const ragExtractor = RunnableSequence.from([
  {
    text: retriever.pipe((docs) => docs[0].pageContent),
  },
  extractionChain,
]);
```

```typescript
const results = await ragExtractor.invoke(
  "Key developments associated with cars"
);
```

```typescript
results.key_developments;
```

```
[
  {
    year: 2020,
    description: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million km (1."... 33 more characters,
    evidence: "The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 millionkm (1.2"... 31 more characters
  },
  {
    year: 2030,
    description: "All fossil fuel vehicles will be banned in Amsterdam from 2030.",
    evidence: "all fossil fuel vehicles will be banned in Amsterdam from 2030."
  },
  {
    year: 2020,
    description: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year.",
    evidence: "In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year."
  }
]
```
Common issues
-------------
Different methods have their own pros and cons related to cost, speed, and accuracy.
Watch out for these issues:
* Chunking content means that the LLM can fail to extract information if the information is spread across multiple chunks.
* Large chunk overlap may cause the same information to be extracted twice, so be prepared to de-duplicate (see the sketch after this list)!
* LLMs can make up data. If looking for a single fact across a large text and using a brute-force approach, you may end up getting more made-up data.
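As a concrete illustration of the de-duplication point, here is a minimal sketch for the merged `keyDevelopments` array from above; treating year plus description as the identity of an extraction is our own assumption:

```typescript
// Hypothetical helper: drop duplicate extractions produced by overlapping chunks.
function dedupeKeyDevelopments(
  developments: { year: number; description: string; evidence: string }[]
) {
  const seen = new Map<string, (typeof developments)[number]>();
  for (const dev of developments) {
    // Two extractions with the same year and description count as duplicates.
    const key = `${dev.year}::${dev.description}`;
    if (!seen.has(key)) {
      seen.set(key, dev);
    }
  }
  return [...seen.values()];
}

const uniqueDevelopments = dedupeKeyDevelopments(keyDevelopments);
```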
* * *

https://js.langchain.com/v0.1/docs/use_cases/extraction/guidelines/
Guidelines
==========
The quality of extraction results depends on many factors.
Here is a set of guidelines to help you squeeze out the best performance from your models:
* Set the model temperature to `0`.
* Improve the prompt. The prompt should be precise and to the point.
* Document the schema: Make sure the schema is documented to provide more information to the LLM.
* Provide reference examples! Diverse examples can help, including examples where nothing should be extracted.
* If you have a lot of examples, use a retriever to retrieve the most relevant examples.
* Benchmark with the best available LLM/Chat Model (e.g., claude-3, gpt-4, etc.); check with the model provider which one is the latest and greatest!
* If the schema is very large, try breaking it into multiple smaller schemas, run separate extractions, and merge the results (see the sketch after this list).
* Make sure that the schema allows the model to REJECT extracting information. If it doesn't, the model will be forced to make up information!
* Add verification/correction steps (ask an LLM to correct or verify the results of the extraction).
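As a rough sketch of the schema-splitting idea, you might run two smaller extractions and merge the structured results. This assumes the `prompt` and `llm` setup from the quickstart; the schemas, field names, and input text here are invented for illustration:

```typescript
import { z } from "zod";

// Hypothetical smaller schemas carved out of one large schema.
const contactSchema = z.object({
  emails: z.array(z.string()).describe("Email addresses found in the text"),
});
const orgSchema = z.object({
  organizations: z
    .array(z.string())
    .describe("Organization names mentioned in the text"),
});

const contactChain = prompt.pipe(llm.withStructuredOutput(contactSchema));
const orgChain = prompt.pipe(llm.withStructuredOutput(orgSchema));

// Run the two extractions separately, then merge the structured results.
const text = "Reach ada@example.com, who works at Initech.";
const [contacts, orgs] = await Promise.all([
  contactChain.invoke({ text }),
  orgChain.invoke({ text }),
]);
const merged = { ...contacts, ...orgs };
```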
Benchmark
---------

* Create and benchmark data for your use case using [LangSmith 🦜️🛠️](https://docs.smith.langchain.com/).
* Is your LLM good enough? Use [langchain-benchmarks 🦜💯](https://github.com/langchain-ai/langchain-benchmarks) to test out your LLM using existing datasets.
Keep in mind! 😶‍🌫️
--------------------
* LLMs are great, but are not required for all cases! If you're extracting information from a single structured source (e.g., LinkedIn), using an LLM is not a good idea; traditional web scraping will be much cheaper and more reliable.
* **Human in the loop**: If you need **perfect quality**, you'll likely need to plan on having a human in the loop; even the best LLMs will make mistakes when dealing with complex extraction tasks.
* * *

https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/custom/
Custom output parsers
=====================
If there is a custom format you want to transform a model's output into, you can subclass and create your own output parser.
The simplest kind of output parser extends the [`BaseOutputParser<T>` class](https://api.js.langchain.com/classes/langchain_core_output_parsers.BaseOutputParser.html) and must implement the following methods:
* `parse`, which takes extracted string output from the model and returns an instance of `T`.
* `getFormatInstructions`, which returns formatting instructions to pass to the model's prompt to encourage output in the correct format.
The `parse` method should also throw a special type of error called an [`OutputParserException`](https://api.js.langchain.com/classes/langchain_core_output_parsers.OutputParserException.html) if the LLM output is badly formatted, which will trigger special retry behavior in other modules.
Here is a simplified example that expects the LLM to output a JSON object with specific named properties:
```typescript
import {
  BaseOutputParser,
  OutputParserException,
} from "@langchain/core/output_parsers";

export interface CustomOutputParserFields {}

// This can be more generic, like Record<string, string>
export type ExpectedOutput = {
  greeting: string;
};

export class CustomOutputParser extends BaseOutputParser<ExpectedOutput> {
  lc_namespace = ["langchain", "output_parsers"];

  constructor(fields?: CustomOutputParserFields) {
    super(fields);
  }

  async parse(llmOutput: string): Promise<ExpectedOutput> {
    let parsedText;
    try {
      parsedText = JSON.parse(llmOutput);
    } catch (e) {
      throw new OutputParserException(
        `Failed to parse. Text: "${llmOutput}". Error: ${e.message}`
      );
    }
    if (parsedText.greeting === undefined) {
      throw new OutputParserException(
        `Failed to parse. Text: "${llmOutput}". Error: Missing "greeting" key.`
      );
    }
    if (Object.keys(parsedText).length !== 1) {
      throw new OutputParserException(
        `Failed to parse. Text: "${llmOutput}". Error: Expected one and only one key named "greeting".`
      );
    }
    return parsedText;
  }

  getFormatInstructions(): string {
    return `Your response must be a JSON object with a single key called "greeting" with a single string value. Do not return anything else.`;
  }
}
```
Then, we can use it with an LLM like this:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const template = `Answer the following user question to the best of your ability:

{format_instructions}

{question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({});

const outputParser = new CustomOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const result = await chain.invoke({
  question: "how are you?",
  format_instructions: outputParser.getFormatInstructions(),
});

console.log(typeof result);
console.log(result);
```

```
object
{
  greeting: "I am an AI assistant programmed to provide information and assist with tasks. How can I help you tod"... 3 more characters
}
```
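Because `parse` throws an `OutputParserException` on malformed output, you can exercise the failure path directly. A quick sketch; the malformed input string is our own:

```typescript
try {
  // Not valid JSON, so the parser should throw an OutputParserException.
  await outputParser.parse("This is not JSON");
} catch (e) {
  console.log(e.message);
  // -> Failed to parse. Text: "This is not JSON". Error: ...
}
```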
Parsing raw model outputs
-------------------------
Sometimes there is additional metadata on the model output that is important besides the raw text. One example of this is function calling, where arguments intended to be passed to called functions are returned in a separate property. If you need this finer-grained control, you can instead subclass the [`BaseLLMOutputParser<T>` class](https://api.js.langchain.com/classes/langchain_core_output_parsers.BaseLLMOutputParser.html). This class requires a single method:
* `parseResult`, which takes a [`Generation[]`](https://api.js.langchain.com/interfaces/langchain_core_outputs.Generation.html) or a [`ChatGeneration[]`](https://api.js.langchain.com/interfaces/langchain_core_outputs.ChatGeneration.html) as a parameter. This is because output parsers generally work with both chat models and LLMs, and therefore must be able to handle both types of outputs.
The `getFormatInstructions` method is not required for this class. Here's an example of the above output parser rewritten in this style:
```typescript
import {
  BaseLLMOutputParser,
  OutputParserException,
} from "@langchain/core/output_parsers";
import { ChatGeneration, Generation } from "@langchain/core/outputs";

export interface CustomOutputParserFields {}

// This can be more generic, like Record<string, string>
export type ExpectedOutput = {
  greeting: string;
};

function isChatGeneration(
  llmOutput: ChatGeneration | Generation
): llmOutput is ChatGeneration {
  return "message" in llmOutput;
}

export class CustomLLMOutputParser extends BaseLLMOutputParser<ExpectedOutput> {
  lc_namespace = ["langchain", "output_parsers"];

  constructor(fields?: CustomOutputParserFields) {
    super(fields);
  }

  async parseResult(
    llmOutputs: ChatGeneration[] | Generation[]
  ): Promise<ExpectedOutput> {
    if (!llmOutputs.length) {
      throw new OutputParserException(
        "Output parser did not receive any generations."
      );
    }
    let parsedOutput;
    // There is a standard `text` property as well on both types of Generation
    if (isChatGeneration(llmOutputs[0])) {
      parsedOutput = llmOutputs[0].message.content;
    } else {
      parsedOutput = llmOutputs[0].text;
    }
    let parsedText;
    try {
      parsedText = JSON.parse(parsedOutput);
    } catch (e) {
      throw new OutputParserException(
        `Failed to parse. Text: "${parsedOutput}". Error: ${e.message}`
      );
    }
    if (parsedText.greeting === undefined) {
      throw new OutputParserException(
        `Failed to parse. Text: "${parsedOutput}". Error: Missing "greeting" key.`
      );
    }
    if (Object.keys(parsedText).length !== 1) {
      throw new OutputParserException(
        `Failed to parse. Text: "${parsedOutput}". Error: Expected one and only one key named "greeting".`
      );
    }
    return parsedText;
  }
}
```
```typescript
const template = `Answer the following user question to the best of your ability:
Your response must be a JSON object with a single key called "greeting" with a single string value. Do not return anything else.
{question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({});

const outputParser = new CustomLLMOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const result = await chain.invoke({
  question: "how are you?",
});

console.log(typeof result);
console.log(result);
```
```
object
{
  greeting: "I'm an AI assistant, I don't have feelings but thank you for asking!"
}
```
Streaming
---------
The above parser will work well for parsing fully aggregated model outputs, but it will cause `.stream()` to return a single aggregated chunk rather than emitting chunks as the model generates them:
```typescript
const stream = await chain.stream({
  question: "how are you?",
});

for await (const chunk of stream) {
  console.log(chunk);
}
```
{ greeting: "I'm an AI assistant, so I don't feel emotions but I'm here to help you." }
This makes sense in some scenarios where we need to wait for the LLM to finish generating before parsing the output, but supporting preemptive parsing when possible creates nicer downstream user experiences. A simple example is automatically transforming streamed output into bytes as it is generated for use in HTTP responses.
The base class in this case is [`BaseTransformOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.BaseTransformOutputParser.html), which itself extends `BaseOutputParser`. As before, you'll need to implement the `parse` method, but this time it's a bit trickier since each `parse` invocation needs to potentially handle a chunk of output rather than the whole thing. Here's a simple example:
```typescript
import { BaseTransformOutputParser } from "@langchain/core/output_parsers";

export class CustomTransformOutputParser extends BaseTransformOutputParser<Uint8Array> {
  lc_namespace = ["langchain", "output_parsers"];

  protected textEncoder = new TextEncoder();

  async parse(text: string): Promise<Uint8Array> {
    return this.textEncoder.encode(text);
  }

  getFormatInstructions(): string {
    return "";
  }
}
```
```typescript
const template = `Answer the following user question to the best of your ability:
{question}`;

const prompt = ChatPromptTemplate.fromTemplate(template);

const model = new ChatOpenAI({});

const outputParser = new CustomTransformOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

const stream = await chain.stream({
  question: "how are you?",
});

for await (const chunk of stream) {
  console.log(chunk);
}
```
```
Uint8Array(0) []
Uint8Array(2) [ 65, 115 ]
Uint8Array(3) [ 32, 97, 110 ]
Uint8Array(3) [ 32, 65, 73 ]
Uint8Array(1) [ 44 ]
Uint8Array(2) [ 32, 73 ]
Uint8Array(4) [ 32, 100, 111, 110 ]
Uint8Array(2) [ 39, 116 ]
Uint8Array(5) [ 32, 104, 97, 118, 101 ]
Uint8Array(9) [ 32, 102, 101, 101, 108, 105, 110, 103, 115 ]
Uint8Array(3) [ 32, 111, 114 ]
Uint8Array(9) [ 32, 101, 109, 111, 116, 105, 111, 110, 115 ]
Uint8Array(1) [ 44 ]
Uint8Array(3) [ 32, 115, 111 ]
Uint8Array(2) [ 32, 73 ]
Uint8Array(4) [ 32, 100, 111, 110 ]
Uint8Array(2) [ 39, 116 ]
Uint8Array(11) [ 32, 101, 120, 112, 101, 114, 105, 101, 110, 99, 101 ]
Uint8Array(4) [ 32, 116, 104, 101 ]
Uint8Array(5) [ 32, 115, 97, 109, 101 ]
Uint8Array(4) [ 32, 119, 97, 121 ]
Uint8Array(7) [ 32, 104, 117, 109, 97, 110, 115 ]
Uint8Array(3) [ 32, 100, 111 ]
Uint8Array(1) [ 46 ]
Uint8Array(8) [ 32, 72, 111, 119, 101, 118, 101, 114 ]
Uint8Array(1) [ 44 ]
Uint8Array(2) [ 32, 73 ]
Uint8Array(2) [ 39, 109 ]
Uint8Array(5) [ 32, 104, 101, 114, 101 ]
Uint8Array(3) [ 32, 116, 111 ]
Uint8Array(5) [ 32, 104, 101, 108, 112 ]
Uint8Array(4) [ 32, 121, 111, 117 ]
Uint8Array(5) [ 32, 119, 105, 116, 104 ]
Uint8Array(4) [ 32, 97, 110, 121 ]
Uint8Array(10) [ 32, 113, 117, 101, 115, 116, 105, 111, 110, 115 ]
Uint8Array(3) [ 32, 111, 114 ]
Uint8Array(6) [ 32, 116, 97, 115, 107, 115 ]
Uint8Array(4) [ 32, 121, 111, 117 ]
Uint8Array(5) [ 32, 104, 97, 118, 101 ]
Uint8Array(1) [ 33 ]
Uint8Array(4) [ 32, 72, 111, 119 ]
Uint8Array(4) [ 32, 99, 97, 110 ]
Uint8Array(2) [ 32, 73 ]
Uint8Array(7) [ 32, 97, 115, 115, 105, 115, 116 ]
Uint8Array(4) [ 32, 121, 111, 117 ]
Uint8Array(6) [ 32, 116, 111, 100, 97, 121 ]
Uint8Array(1) [ 63 ]
Uint8Array(0) []
```
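Since each emitted chunk is already a `Uint8Array`, the resulting stream can be used directly as an HTTP response body. Here's a minimal sketch of our own (an assumption on our part rather than part of the original example) for a runtime with the web Fetch API, such as Node 18+, Deno, or Cloudflare Workers:

```typescript
// Hypothetical request handler: `chain` is the prompt -> model -> byte
// parser chain from above. The stream returned by `.stream()` is a
// web-standard ReadableStream, so it can be passed straight to a Response.
const handler = async (request: Request): Promise<Response> => {
  const { question } = await request.json();
  const byteStream = await chain.stream({ question });
  return new Response(byteStream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
};
```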
For more examples, see some of the implementations [in @langchain/core](https://github.com/langchain-ai/langchainjs/tree/main/langchain-core/src/output_parsers).
https://js.langchain.com/v0.1/docs/use_cases/extraction/how_to/handle_files/
Handle Files
============
Besides raw text data, you may wish to extract information from other file types such as PowerPoint presentations or PDFs.
The general strategy is to use a LangChain [document loader](/v0.1/docs/modules/data_connection/document_loaders/) or other method to parse files into a text format that can be fed into LLMs.
LangChain features a large number of [document loader integrations](/v0.1/docs/integrations/document_loaders/).
Let's go over an example of loading and extracting data from a PDF. First, we install the required dependencies:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai zod
yarn add @langchain/openai zod
pnpm add @langchain/openai zod
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

// Only required in a Deno notebook environment to load the peer dep.
import "pdf-parse";

const loader = new PDFLoader("./test/data/bitcoin.pdf");

const docs = await loader.load();
```
[Module: null prototype] { default: [AsyncFunction: PDF] }
Now that we've loaded a PDF document, let's try extracting the people it mentions. We can define a schema like this:
```typescript
import { z } from "zod";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
    email: z.optional(z.string()).describe("The person's email, if present"),
  })
  .describe("Information about a person.");

const peopleSchema = z.object({
  people: z.array(personSchema),
});
```
And then initialize our extraction chain like this:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  ["human", "{text}"],
]);

const llm = new ChatOpenAI({
  model: "gpt-4-0125-preview",
  temperature: 0,
});

const extractionRunnable = prompt.pipe(
  llm.withStructuredOutput(peopleSchema, { name: "people" })
);
```
Now, let's try invoking it!
```typescript
await extractionRunnable.invoke({ text: docs[0].pageContent });
```
{ people: [ { name: "Satoshi Nakamoto", email: "satoshin@gmx.com" } ] }
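The PDF loader returns one document per page, and above we only ran extraction over the first one. As a rough sketch of our own (reusing the `docs` and `extractionRunnable` from above), you could run the extraction over every page with `.batch()` and merge the results:

```typescript
// Run extraction over each page concurrently, then flatten the results.
// `maxConcurrency` is part of the standard RunnableConfig.
const perPageResults = await extractionRunnable.batch(
  docs.map((doc) => ({ text: doc.pageContent })),
  { maxConcurrency: 5 }
);

const allPeople = perPageResults.flatMap((result) => result.people);
console.log(allPeople);
```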
https://js.langchain.com/v0.1/docs/use_cases/extraction/how_to/parse/
Without function calling
========================
LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format without using function calling.
This approach relies on designing good prompts and then parsing the LLM's output so that it extracts information well, though it lacks some of the guarantees provided by function calling or JSON mode.
Here, we'll use Claude, which is great at following instructions! See [here for more about Anthropic models](/v0.1/docs/integrations/chat/anthropic/).
First, we'll install the integration package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic zod zod-to-json-schema
yarn add @langchain/anthropic zod zod-to-json-schema
pnpm add @langchain/anthropic zod zod-to-json-schema
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
tip
All the same considerations for extraction quality apply to the parsing approach as well. Review the [guidelines](/v0.1/docs/use_cases/extraction/guidelines/) for extraction quality.
This tutorial is meant to be simple; in practice, you should include reference examples to squeeze out more performance!
Using StructuredOutputParser
----------------------------
The following example uses the built-in [`StructuredOutputParser`](/v0.1/docs/modules/model_io/output_parsers/types/structured/) to parse the output of a chat model. We use the built-in prompt formatting instructions contained in the parser.
```typescript
import { z } from "zod";
import { StructuredOutputParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const parser = StructuredOutputParser.fromZodSchema(personSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user query. Wrap the output in `json` tags\n{format_instructions}",
  ],
  ["human", "{query}"],
]);

const partialedPrompt = await prompt.partial({
  format_instructions: parser.getFormatInstructions(),
});
```
Let's take a look at what information is sent to the model:
const query = "Anna is 23 years old and she is 6 feet tall";
const promptValue = await partialedPrompt.invoke({ query });console.log(promptValue.toChatMessages());
```
[
  SystemMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Answer the user query. Wrap the output in `json` tags\n" +
        "You must format your output as a JSON value th"... 1444 more characters,
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Answer the user query. Wrap the output in `json` tags\n" +
      "You must format your output as a JSON value th"... 1444 more characters,
    name: undefined,
    additional_kwargs: {}
  },
  HumanMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: "Anna is 23 years old and she is 6 feet tall",
      additional_kwargs: {}
    },
    lc_namespace: [ "langchain_core", "messages" ],
    content: "Anna is 23 years old and she is 6 feet tall",
    name: undefined,
    additional_kwargs: {}
  }
]
```
```typescript
const chain = partialedPrompt.pipe(model).pipe(parser);

await chain.invoke({ query });
```
{ name: "Anna", hair_color: "", height_in_meters: "1.83" }
Custom Parsing
--------------
You can also create a custom prompt and parser with `LangChain` and `LCEL`.
You can use a raw function to parse the output from the model.
In the below example, we'll pass the schema into the prompt as JSON schema. For convenience, we'll declare our schema with Zod, then use the [`zod-to-json-schema`](https://github.com/StefanTerdell/zod-to-json-schema) utility to convert it to JSON schema.
````typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const peopleSchema = z.object({
  people: z.array(personSchema),
});

const SYSTEM_PROMPT_TEMPLATE = [
  "Answer the user's query. You must return your answer as JSON that matches the given schema:",
  "```json\n{schema}\n```.",
  "Make sure to wrap the answer in ```json and ``` tags. Conform to the given schema exactly.",
].join("\n");

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  ["human", "{query}"],
]);

const extractJsonFromOutput = (message) => {
  const text = message.content;

  // Define the regular expression pattern to match JSON blocks
  const pattern = /```json\s*((.|\n)*?)\s*```/gs;

  // Find the first match of the pattern in the string
  const matches = pattern.exec(text);

  if (matches && matches[1]) {
    try {
      return JSON.parse(matches[1].trim());
    } catch (error) {
      throw new Error(`Failed to parse: ${matches[1]}`);
    }
  } else {
    throw new Error(`No JSON found in: ${message}`);
  }
};
````
const query = "Anna is 23 years old and she is 6 feet tall";const promptValue = await prompt.invoke({ schema: zodToJsonSchema(peopleSchema), query,});promptValue.toString();
"System: Answer the user's query. You must return your answer as JSON that matches the given schema:\n"... 170 more characters
```typescript
const chain = prompt.pipe(model).pipe(extractJsonFromOutput);

await chain.invoke({
  schema: zodToJsonSchema(peopleSchema),
  query,
});
```
{ name: "Anna", age: 23, height: { feet: 6, inches: 0 } }
https://js.langchain.com/v0.1/docs/integrations/chat/anthropic_tools/
anthropic_tools
===============
danger
This API is deprecated as Anthropic now officially supports tools. [Click here to read the documentation](/v0.1/docs/integrations/chat/anthropic/#tools).
Anthropic Tools
===============
LangChain offers an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.
Setup
-----
To start, install the `@langchain/anthropic` integration package.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Initialize model
----------------
You can initialize this wrapper the same way you'd initialize a standard `ChatAnthropic` instance:
tip
We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.
```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
```
Passing in tools
----------------
You can now pass in tools the same way as OpenAI:
```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  // You can set the `function_call` arg to force the model to use a function
  tool_choice: {
    type: "function",
    function: {
      name: "get_current_weather",
    },
  },
});

const response = await model.invoke([
  new HumanMessage({
    content: "What's the weather in Boston?",
  }),
]);

console.log(response);
/*
  AIMessage {
    lc_serializable: true,
    lc_kwargs: { content: '', additional_kwargs: { tool_calls: [Array] } },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '',
    name: undefined,
    additional_kwargs: { tool_calls: [ [Object] ] }
  }
*/

console.log(response.additional_kwargs.tool_calls);
/*
  [
    {
      id: '0',
      type: 'function',
      function: {
        name: 'get_current_weather',
        arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
      }
    }
  ]
*/
```
#### API Reference:
* [ChatAnthropicTools](https://api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
Parallel tool calling
---------------------
The model may choose to call multiple tools. Here is an example using an extraction use-case:
```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.

Passage:
{input}`;

const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);

// Use Zod for easier schema declaration
const schema = z.object({
  name: z.string().describe("The name of a person"),
  height: z.number().describe("The person's height"),
  hairColor: z.optional(z.string()).describe("The person's hair color"),
});

const model = new ChatAnthropicTools({
  temperature: 0.1,
  model: "claude-3-sonnet-20240229",
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "person",
        description: "Extracts the relevant people from the passage.",
        parameters: zodToJsonSchema(schema),
      },
    },
  ],
  // Can also set to "auto" to let the model choose a tool
  tool_choice: {
    type: "function",
    function: {
      name: "person",
    },
  },
});

// Use a JsonOutputToolsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputToolsParser());

const response = await chain.invoke({
  input:
    "Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.",
});

console.log(JSON.stringify(response, null, 2));
/*
  [
    {
      "type": "person",
      "args": {
        "name": "Alex",
        "height": 5,
        "hairColor": "blonde"
      }
    },
    {
      "type": "person",
      "args": {
        "name": "Claudia",
        "height": 6,
        "hairColor": "brunette"
      }
    }
  ]
*/
```
#### API Reference:
* [ChatAnthropicTools](https://api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [JsonOutputToolsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputToolsParser.html) from `langchain/output_parsers`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
`.withStructuredOutput({ ... })`
--------------------------------
info
The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change.
Using the `.withStructuredOutput` method, you can make the LLM return structured output, given only a Zod or JSON schema:
```typescript
import { z } from "zod";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute"),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const model = new ChatAnthropicTools({
  model: "claude-3-sonnet-20240229",
  temperature: 0.1,
});

// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

// You can also set force: false to allow the model scratchpad space.
// This may improve reasoning capabilities.
// const modelWithTool = model.withStructuredOutput(calculatorSchema, {
//   force: false,
// });

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```
#### API Reference:
* [ChatAnthropicTools](https://api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
### Using JSON schema:
```typescript
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorJsonSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute.",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: {
      type: "number",
      description: "The second number to operate on.",
    },
  },
  required: ["operation", "number1", "number2"],
  description: "A simple calculator tool",
};

const model = new ChatAnthropicTools({
  model: "claude-3-sonnet-20240229",
  temperature: 0.1,
});

// Pass the schema and optionally, the tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema, {
  name: "calculator",
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
/*
  { operation: 'add', number1: 2, number2: 2 }
*/
```
#### API Reference:
* [ChatAnthropicTools](https://api.js.langchain.com/classes/langchain_anthropic_experimental.ChatAnthropicTools.html) from `@langchain/anthropic/experimental`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
https://js.langchain.com/v0.1/docs/integrations/chat/ollama/
ChatOllama
==========
[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
This example goes over how to use LangChain to interact with an Ollama-run Llama 2 7b instance as a chat model. For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).
Setup
-----
Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Usage
-----
```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
});

const stream = await model
  .pipe(new StringOutputParser())
  .stream(`Translate "I love programming" into German.`);

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}

console.log(chunks.join(""));
/*
  Thank you for your question! I'm happy to help. However, I must point out that the phrase "I love programming" is not grammatically correct in German. The word "love" does not have a direct translation in German, and it would be more appropriate to say "I enjoy programming" or "I am passionate about programming."

  In German, you can express your enthusiasm for something like this:

  * Ich möchte Programmieren (I want to program)
  * Ich mag Programmieren (I like to program)
  * Ich bin passioniert über Programmieren (I am passionate about programming)

  I hope this helps! Let me know if you have any other questions.
*/
```
#### API Reference:
* [ChatOllama](https://api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
JSON mode
---------
Ollama also supports a JSON mode that coerces model outputs to only return JSON. Here's an example of how this can be useful for extraction:
```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    `You are an expert translator. Format all responses as JSON objects with two keys: "original" and "translated".`,
  ],
  ["human", `Translate "{input}" into {language}.`],
]);

const model = new ChatOllama({
  baseUrl: "http://localhost:11434", // Default value
  model: "llama2", // Default value
  format: "json",
});

const chain = prompt.pipe(model);

const result = await chain.invoke({
  input: "I love programming",
  language: "German",
});

console.log(result);
/*
  AIMessage {
    content: '{"original": "I love programming", "translated": "Ich liebe das Programmieren"}',
    additional_kwargs: {}
  }
*/
```
#### API Reference:
* [ChatOllama](https://api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
You can see a simple LangSmith trace of this here: [https://smith.langchain.com/public/92aebeca-d701-4de0-a845-f55df04eff04/r](https://smith.langchain.com/public/92aebeca-d701-4de0-a845-f55df04eff04/r)
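The message content above is still a raw JSON string. If you want a parsed object instead, a simple sketch of our own (reusing the `prompt` and `model` from above) is to pipe the model into a [`JsonOutputParser`](https://api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html):

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

// Parses the JSON-mode message content into a plain JavaScript object.
const jsonChain = prompt.pipe(model).pipe(new JsonOutputParser());

const parsed = await jsonChain.invoke({
  input: "I love programming",
  language: "German",
});

console.log(parsed);
// e.g. { original: "I love programming", translated: "Ich liebe das Programmieren" }
```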
Multimodal models
-----------------
Ollama supports open source multimodal models like [LLaVA](https://ollama.ai/library/llava) in versions 0.1.15 and up. You can pass images as part of a message's `content` field to multimodal-capable models like this:
```typescript
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { HumanMessage } from "@langchain/core/messages";
import * as fs from "node:fs/promises";

const imageData = await fs.readFile("./hotdog.jpg");

const chat = new ChatOllama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
});

const res = await chat.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What is in this image?",
      },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    ],
  }),
]);

console.log(res);
/*
  AIMessage {
    content: ' The image shows a hot dog with ketchup on it, placed on top of a bun. It appears to be a close-up view, possibly taken in a kitchen setting or at an outdoor event.',
    name: undefined,
    additional_kwargs: {}
  }
*/
```
#### API Reference:
* [ChatOllama](https://api.js.langchain.com/classes/langchain_community_chat_models_ollama.ChatOllama.html) from `@langchain/community/chat_models/ollama`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
This will currently not use the image's position within the prompt message as additional information, and will just pass the image along as context with the rest of the prompt messages.
https://js.langchain.com/v0.1/docs/use_cases/query_analysis/techniques/hyde/
Hypothetical Document Embeddings
================================
If we're working with a similarity search-based index, like a vector store, then searching on raw questions may not work well because their embeddings may not be very similar to those of the relevant documents. Instead it might help to have the model generate a hypothetical relevant document, and then use that to perform similarity search. This is the key idea behind [Hypothetical Document Embedding, or HyDE](https://arxiv.org/pdf/2212.10496.pdf).
Let's take a look at how we might perform search via hypothetical documents for our Q&A bot over the LangChain YouTube videos.
Setup
-----
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/core zod
yarn add @langchain/core zod
pnpm add @langchain/core zod
#### Set environment variables
```
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Hypothetical document generation
--------------------------------
Ultimately generating a relevant hypothetical document reduces to trying to answer the user question. Since we're designing a Q&A bot for LangChain YouTube videos, we'll provide some basic context about LangChain and prompt the model to use a more pedantic style so that we get more realistic hypothetical documents:
### Pick your chat model:
* OpenAI
* Anthropic
* FireworksAI
* MistralAI
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
#### Add environment variables
OPENAI_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
#### Add environment variables
ANTHROPIC_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
#### Add environment variables
FIREWORKS_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/docs/get_started/installation#installing-integration-packages).
* npm
* yarn
* pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
#### Add environment variables
MISTRAL_API_KEY=your-api-key
#### Instantiate the model
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert about a set of software for building LLM-powered applications called LangChain, LangGraph, LangServe, and LangSmith.

LangChain is a Python framework that provides a large set of integrations that can easily be composed to build LLM applications.
LangGraph is a Python package built on top of LangChain that makes it easy to build stateful, multi-actor LLM applications.
LangServe is a Python package built on top of LangChain that makes it easy to deploy a LangChain application as a REST API.
LangSmith is a platform that makes it easy to trace and test LLM applications.

Answer the user question as best you can. Answer as though you were writing a tutorial that addressed the user question.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const qaNoContext = prompt.pipe(llm).pipe(new StringOutputParser());
```
const answer = await qaNoContext.invoke({ question: "how to use multi-modal models in a chain and turn chain into a rest api",});console.log(answer);
To use multi-modal models in a chain and turn the chain into a REST API, you can leverage the capabilities of LangChain, LangGraph, and LangServe. Here's a step-by-step guide on how to achieve this:1. **Set up LangChain**: Start by installing LangChain, LangGraph, and LangServe in your Python environment. You can do this using pip:```bashpip install langchain langgraph langserve```2. **Build a Multi-Modal Model**: Create your multi-modal model using LangChain. LangChain provides integrations with various deep learning frameworks like TensorFlow, PyTorch, and Hugging Face Transformers. You can easily compose different modalities (text, image, audio, etc.) in your model.3. **Use LangGraph for Stateful Multi-Actor Applications**: If your multi-modal model requires stateful interactions between different actors, you can use LangGraph to build such applications. LangGraph simplifies the process of managing state and interactions in your LLM application.4. **Deploy as a REST API using LangServe**: Once you have built your multi-modal model and defined the interactions using LangGraph, you can deploy your chain as a REST API using LangServe. LangServe makes it easy to expose your LangChain application as a web service, allowing users to interact with your model through HTTP requests.5. **Define Endpoints**: In your LangServe application, define the endpoints that correspond to different functionalities of your multi-modal model. For example, you can have endpoints for text input, image input, audio input, etc.6. **Handle Requests**: Implement the logic to handle incoming requests in your LangServe application. Parse the input data, pass it through your multi-modal model, and return the results in the desired format.7. **Start the LangServe Server**: Once you have defined your endpoints and request handling logic, start the LangServe server to make your multi-modal model accessible as a REST API. You can specify the host, port, and other configurations when starting the server.By following these steps, you can effectively use multi-modal models in a chain and expose it as a REST API using LangChain, LangGraph, and LangServe. This approach allows you to build complex LLM applications with stateful interactions and make them accessible to users through a web interface.
Returning the hypothetical document and original question
----------------------------------------------------------
To increase our recall we may want to retrieve documents based on both the hypothetical document and the original question. We can easily return both like so:
```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

const hydeChain = RunnablePassthrough.assign({
  hypotheticalDocument: qaNoContext,
});

await hydeChain.invoke({
  question:
    "how to use multi-modal models in a chain and turn chain into a rest api",
});
```

```text
{
  question: "how to use multi-modal models in a chain and turn chain into a rest api",
  hypotheticalDocument: "To use multi-modal models in a chain and turn the chain into a REST API, you can leverage the capabi"... 1920 more characters
}
```
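From here we can hand both strings to a retriever. As a minimal sketch, assuming a hypothetical `retriever` like the one built in the Quickstart is in scope, we might query with both texts and deduplicate the combined results:

```typescript
import type { Document } from "@langchain/core/documents";

// Hypothetical retriever, e.g. a vector store retriever from the Quickstart.
declare const retriever: { invoke(query: string): Promise<Document[]> };

const retrieveWithHyde = async (inputs: {
  question: string;
  hypotheticalDocument: string;
}): Promise<Document[]> => {
  // Query with both the original question and the hypothetical document.
  const [fromQuestion, fromHypothetical] = await Promise.all([
    retriever.invoke(inputs.question),
    retriever.invoke(inputs.hypotheticalDocument),
  ]);
  // Deduplicate overlapping hits by page content.
  const seen = new Set<string>();
  return [...fromQuestion, ...fromHypothetical].filter((doc) => {
    if (seen.has(doc.pageContent)) return false;
    seen.add(doc.pageContent);
    return true;
  });
};

const hydeRetrievalChain = hydeChain.pipe(retrieveWithHyde);
```

Retrieving with the hypothetical document is the core HyDE step; keeping the original question as a second query hedges against a hallucinated hypothetical answer.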
Using function-calling to get structured output
-----------------------------------------------
If we were composing this technique with other query analysis techniques, we'd likely be using function calling to produce structured query objects. We can use function-calling for HyDE like so:
```typescript
import { z } from "zod";

const querySchema = z.object({
  answer: z
    .string()
    .describe(
      "Answer the user question as best you can. Answer as though you were writing a tutorial that addressed the user question."
    ),
});

const system = `You are an expert about a set of software for building LLM-powered applications called LangChain, LangGraph, LangServe, and LangSmith.

LangChain is a Python framework that provides a large set of integrations that can easily be composed to build LLM applications.
LangGraph is a Python package built on top of LangChain that makes it easy to build stateful, multi-actor LLM applications.
LangServe is a Python package built on top of LangChain that makes it easy to deploy a LangChain application as a REST API.
LangSmith is a platform that makes it easy to trace and test LLM applications.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(querySchema, {
  name: "Query",
});

const hydeChain = prompt.pipe(llmWithTools);

await hydeChain.invoke({
  question:
    "how to use multi-modal models in a chain and turn chain into a rest api",
});
```

```text
{
  answer: "To use multi-modal models in a chain and turn the chain into a REST API, you can follow these steps:"... 713 more characters
}
```
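The structured `answer` can then flow straight into retrieval. A one-line sketch, again assuming a hypothetical `retriever` is in scope:

```typescript
// Hypothetical: feed the generated answer into a retriever over the video index.
const hydeRetrieval = hydeChain.pipe((query) => retriever.invoke(query.answer));
```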
Expansion
=========
Information retrieval systems can be sensitive to phrasing and specific keywords. To mitigate this, one classic retrieval technique is to generate multiple paraphrased versions of a query and return results for all versions of the query. This is called **query expansion**. LLMs are a great tool for generating these alternate versions of a query.
Letβs take a look at how we might do query expansion for our Q&A bot over the LangChain YouTube videos, which we started in the [Quickstart](/v0.1/docs/use_cases/query_analysis/quickstart/).
Setup
-----

#### Install dependencies

Tip: see [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i @langchain/core zod
# or
yarn add @langchain/core zod
# or
pnpm add @langchain/core zod
```

#### Set environment variables

```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Query generation
----------------
To make sure we get multiple paraphrasings weβll use an LLM function-calling API.
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI

The install, environment-variable, and instantiation steps for each provider are identical to those shown in the model picker earlier in this document.
```typescript
import { z } from "zod";

const paraphrasedQuerySchema = z
  .object({
    paraphrasedQuery: z
      .string()
      .describe("A unique paraphrasing of the original question."),
  })
  .describe(
    "You have performed query expansion to generate a paraphrasing of a question."
  );
```

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert at converting user questions into database queries. You have access to a database of tutorial videos about a software library for building LLM-powered applications. Perform query expansion. If there are multiple common ways of phrasing a user question or common synonyms for key words in the question, make sure to return multiple versions of the query with the different phrasings.

If there are acronyms or words you are not familiar with, do not try to rephrase them.

Return at least 3 versions of the question.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(paraphrasedQuerySchema, {
  name: "ParaphrasedQuery",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```
Letβs see what queries our analyzer generates for the questions we searched earlier:
```typescript
await queryAnalyzer.invoke({
  question:
    "how to use multi-modal models in a chain and turn chain into a rest api",
});
```

```text
{ paraphrasedQuery: "How to utilize multi-modal models sequentially and convert the sequence into a REST API?" }
```

```typescript
await queryAnalyzer.invoke({ question: "stream events from llm agent" });
```

```text
{ paraphrasedQuery: "Retrieve real-time data from the LLM agent" }
```
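Note that the schema above only captures a single paraphrasing per call, even though the prompt asks for at least three. If we wanted all of the paraphrasings in one structured result, one option (a sketch, not from the original page) is to make the field an array:

```typescript
// Sketch: an array-valued schema so one call returns every paraphrasing.
const multiQuerySchema = z
  .object({
    paraphrasedQueries: z
      .array(z.string())
      .describe("Unique paraphrasings of the original question."),
  })
  .describe(
    "You have performed query expansion to generate paraphrasings of a question."
  );

const multiQueryAnalyzer = prompt.pipe(
  llm.withStructuredOutput(multiQuerySchema, { name: "ParaphrasedQueries" })
);
```

Each paraphrasing can then be sent to the retriever and the results merged, which is the point of expansion in the first place.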
Routing
=======
Sometimes we have multiple indexes for different domains, and for different questions we want to query different subsets of these indexes. For example, suppose we had one vector store index for all of the LangChain Python documentation and one for all of the LangChain JS documentation. Given a question about LangChain usage, we'd want to infer which language the question was referring to and query the appropriate docs. **Query routing** is the process of classifying which index or subset of indexes a query should be performed on.
Setup
-----

#### Install dependencies

Tip: see [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i @langchain/core zod
# or
yarn add @langchain/core zod
# or
pnpm add @langchain/core zod
```

#### Set environment variables

```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Routing with function calling models
------------------------------------
With function-calling models itβs simple to use models for classification, which is what routing comes down to:
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI

The install, environment-variable, and instantiation steps for each provider are identical to those shown in the model picker earlier in this document.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const routeQuerySchema = z.object({
  datasource: z
    .union([
      z.literal("python_docs"),
      z.literal("js_docs"),
      z.literal("golang_docs"),
    ])
    .describe(
      "Given a user question choose which datasource would be most relevant for answering their question"
    ),
});

const system = `You are an expert at routing a user question to the appropriate data source.

Based on the programming language the question is referring to, route it to the relevant data source.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(routeQuerySchema, {
  name: "RouteQuery",
});

const router = prompt.pipe(llmWithTools);
```

```typescript
const question = `Why doesn't the following code work:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(["human", "speak in {language}"])
prompt.invoke("french")`;

await router.invoke({ question: question });
```

```text
{ datasource: "python_docs" }
```

```typescript
const question = `Why doesn't the following code work:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages([
  ["human", "speak in {language}"],
]);

const formattedChatPrompt = await chatPrompt.invoke({
  input_language: "french",
});`;

await router.invoke({ question: question });
```

```text
{ datasource: "js_docs" }
```
Routing to multiple indexes
---------------------------
If we want to query multiple indexes, we can do that too by updating our schema to accept a list of data sources:
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI

The install, environment-variable, and instantiation steps for each provider are identical to those shown in the model picker earlier in this document.
```typescript
import { z } from "zod";

const routeQuerySchema = z
  .object({
    datasources: z
      .array(
        z.union([
          z.literal("python_docs"),
          z.literal("js_docs"),
          z.literal("golang_docs"),
        ])
      )
      .describe(
        "Given a user question choose which datasources would be most relevant for answering their question"
      ),
  })
  .describe("Route a user query to the most relevant datasource.");

const llmWithTools = llm.withStructuredOutput(routeQuerySchema, {
  name: "RouteQuery",
});

const router = prompt.pipe(llmWithTools);

await router.invoke({
  question:
    "is there feature parity between the Python and JS implementations of OpenAI chat models",
});
```

```text
{ datasources: [ "python_docs", "js_docs" ] }
```
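To act on the routing decision we still need to dispatch the question to the chosen indexes. A minimal sketch, assuming one hypothetical retriever per datasource:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";

// Hypothetical map from datasource name to a retriever over that index.
declare const retrievers: Record<
  "python_docs" | "js_docs" | "golang_docs",
  { invoke(query: string): Promise<Document[]> }
>;

const routeAndRetrieve = new RunnableLambda({
  func: async (input: { question: string }) => {
    const { datasources } = await router.invoke(input);
    // Query every selected index and concatenate the results.
    const perIndex = await Promise.all(
      datasources.map((ds) => retrievers[ds].invoke(input.question))
    );
    return perIndex.flat();
  },
});
```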
Step Back Prompting
===================
Sometimes search quality and model generations can be tripped up by the specifics of a question. One way to handle this is to first generate a more abstract, βstep backβ question and to query based on both the original and step back question.
For example, if we ask a question of the form "Why does my LangGraph agent streamEvents return {LONG_TRACE} instead of {DESIRED_OUTPUT}" we will likely retrieve more relevant documents if we search with the more generic question "How does streamEvents work with a LangGraph agent" than if we search with the specific user question.
Letβs take a look at how we might use step back prompting in the context of our Q&A bot over the LangChain YouTube videos.
Setup
-----

#### Install dependencies

Tip: see [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i @langchain/core zod
# or
yarn add @langchain/core zod
# or
pnpm add @langchain/core zod
```

#### Set environment variables

```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Step back question generation
-----------------------------
Generating good step back questions comes down to writing a good prompt:
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI

The install, environment-variable, and instantiation steps for each provider are identical to those shown in the model picker earlier in this document.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert at taking a specific question and extracting a more generic question that gets at the underlying principles needed to answer the specific question.

You will be asked about a set of software for building LLM-powered applications called LangChain, LangGraph, LangServe, and LangSmith.

LangChain is a Python framework that provides a large set of integrations that can easily be composed to build LLM applications.
LangGraph is a Python package built on top of LangChain that makes it easy to build stateful, multi-actor LLM applications.
LangServe is a Python package built on top of LangChain that makes it easy to deploy a LangChain application as a REST API.
LangSmith is a platform that makes it easy to trace and test LLM applications.

Given a specific user question about one or more of these products, write a more generic question that needs to be answered in order to answer the specific question. If you don't recognize a word or acronym, do not try to rewrite it.

Write concise questions.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const stepBack = prompt.pipe(llm).pipe(new StringOutputParser());
```

```typescript
const question = `I built a LangGraph agent using Gemini Pro and tools like vectorstores and duckduckgo search.
How do I get just the LLM calls from the event stream`;

const result = await stepBack.invoke({ question: question });
console.log(result);
```

```text
What are the specific methods or functions within LangGraph that allow for filtering or extracting LLM calls from an event stream?
```
Returning the stepback question and the original question
----------------------------------------------------------
To increase our recall weβll likely want to retrieve documents based on both the step back question and the original question. We can easily return both like so:
```typescript
import { RunnablePassthrough } from "@langchain/core/runnables";

const stepBackAndOriginal = RunnablePassthrough.assign({ stepBack });

await stepBackAndOriginal.invoke({ question: question });
```

```text
{
  question: "I built a LangGraph agent using Gemini Pro and tools like vectorstores and duckduckgo search.\n" + "How do"... 47 more characters,
  stepBack: "What is the process for extracting specific types of calls, such as LLM calls, from an event stream "... 37 more characters
}
```
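Both questions can then be sent to the retriever. A minimal sketch, assuming a hypothetical `retriever` over the video index:

```typescript
import type { Document } from "@langchain/core/documents";

// Hypothetical retriever over the LangChain YouTube index.
declare const retriever: { invoke(query: string): Promise<Document[]> };

const stepBackRetrievalChain = stepBackAndOriginal.pipe(
  async (inputs: { question: string; stepBack: string }) => {
    // Retrieve with the specific and the generic question, then concatenate.
    // A real pipeline might deduplicate or re-rank the combined results.
    const [specific, generic] = await Promise.all([
      retriever.invoke(inputs.question),
      retriever.invoke(inputs.stepBack),
    ]);
    return [...specific, ...generic];
  }
);
```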
Using function-calling to get structured output
-----------------------------------------------
If we were composing this technique with other query analysis techniques, we'd likely be using function calling to produce structured query objects. We can use function-calling for step back prompting like so:
```typescript
import { z } from "zod";

const stepBackQuerySchema = z.object({
  stepBackQuestion: z
    .string()
    .describe(
      "Given a specific user question about one or more of these products, write a more generic question that needs to be answered in order to answer the specific question."
    ),
});

const llmWithTools = llm.withStructuredOutput(stepBackQuerySchema, {
  name: "StepBackQuery",
});

const stepBackChain = prompt.pipe(llmWithTools);

await stepBackChain.invoke({ question: question });
```

```text
{
  stepBackQuestion: "What are the steps involved in extracting specific types of calls from an event stream in a software"... 13 more characters
}
```
Structuring
===========
One of the most important steps in retrieval is turning a text input into the right search and filter parameters. This process of extracting structured parameters from an unstructured input is what we refer to as **query structuring**.
To illustrate, letβs return to our example of a Q&A bot over the LangChain YouTube videos from the [Quickstart](/v0.1/docs/use_cases/query_analysis/quickstart/) and see what more complex structured queries might look like in this case.
Setup
-----

#### Install dependencies

Tip: see [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i @langchain/core zod
# or
yarn add @langchain/core zod
# or
pnpm add @langchain/core zod
```

#### Set environment variables

```bash
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
### Load example document
Letβs say we loaded a document with the following metadata:
{ "source": "pbAd8O1Lvm4", "title": "Self-reflective RAG with LangGraph: Self-RAG and CRAG", "description": "Unknown", "view_count": 9006, "thumbnail_url": "https://i.ytimg.com/vi/pbAd8O1Lvm4/hq720.jpg", "publish_date": "2024-02-07 00:00:00", "length": 1058, "author": "LangChain"}
Query schema
------------
In order to generate structured queries we first need to define our query schema. We can see that each document has a title, view count, publication date, and length in seconds. Letβs assume weβve built an index that allows us to perform unstructured search over the contents and title of each document, and to use range filtering on view count, publication date, and length.
To start weβll create a schema with explicit min and max attributes for view count, publication date, and video length so that those can be filtered on. And weβll add separate attributes for searches against the transcript contents versus the video title.
We could alternatively create a more generic schema where instead of having one or more filter attributes for each filterable field, we have a single `filters` attribute that takes a list of (attribute, condition, value) tuples. Weβll demonstrate how to do this as well. Which approach works best depends on the complexity of your index. If you have many filterable fields then it may be better to have a single `filters` query attribute. If you have only a few filterable fields and/or there are fields that can only be filtered in very specific ways, it can be helpful to have separate query attributes for them, each with their own description.
```typescript
import { RunnableLambda } from "@langchain/core/runnables";
import { z } from "zod";

const tutorialSearch = z.object({
  content_search: z
    .string()
    .describe("Similarity search query applied to video transcripts."),
  title_search: z
    .string()
    .describe(
      "Alternate version of the content search query to apply to video titles. Should be succinct and only include key words that could be in a video title."
    ),
  min_view_count: z
    .number()
    .optional()
    .describe(
      "Minimum view count filter, inclusive. Only use if explicitly specified."
    ),
  max_view_count: z
    .number()
    .optional()
    .describe(
      "Maximum view count filter, exclusive. Only use if explicitly specified."
    ),
  earliest_publish_date: z
    .date()
    .optional()
    .describe(
      "Earliest publish date filter, inclusive. Only use if explicitly specified."
    ),
  latest_publish_date: z
    .date()
    .optional()
    .describe(
      "Latest publish date filter, exclusive. Only use if explicitly specified."
    ),
  min_length_sec: z
    .number()
    .optional()
    .describe(
      "Minimum video length in seconds, inclusive. Only use if explicitly specified."
    ),
  max_length_sec: z
    .number()
    .optional()
    .describe(
      "Maximum video length in seconds, exclusive. Only use if explicitly specified."
    ),
});

const prettyPrint = (obj: z.infer<typeof tutorialSearch>) => {
  for (const field in obj) {
    if (obj[field] !== undefined) {
      console.log(`${field}: ${JSON.stringify(obj[field], null, 2)}`);
    }
  }
};

const prettyPrintRunnable = new RunnableLambda({
  func: prettyPrint,
}).withConfig({ runName: "prettyPrint" });
```
Query generation
----------------
To convert user questions to structured queries weβll make use of a function-calling model. LangChain has some nice constructors that make it easy to specify a desired function call schema via a Zod schema:
### Pick your chat model:

* OpenAI
* Anthropic
* FireworksAI
* MistralAI

The install, environment-variable, and instantiation steps for each provider are identical to those shown in the model picker earlier in this document.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert at converting user questions into database queries.

You have access to a database of tutorial videos about a software library for building LLM-powered applications.

Given a question, return a database query optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(tutorialSearch, {
  name: "TutorialSearch",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```
Letβs try it out:
```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "rag from scratch" });
```

```text
content_search: "rag from scratch"
title_search: "rag from scratch"
```

```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "videos on chat langchain published in 2023" });
```

```text
content_search: "chat langchain"
title_search: "2023"
earliest_publish_date: "2023-01-01T00:00:00Z"
latest_publish_date: "2024-01-01T00:00:00Z"
```

```typescript
await queryAnalyzer.pipe(prettyPrintRunnable).invoke({
  question:
    "how to use multi-modal models in an agent, only videos under 5 minutes",
});
```

```text
content_search: "multi-modal models agent"
title_search: "multi-modal models agent"
max_length_sec: 300
```
Alternative: Succinct schema
----------------------------
If we have many filterable fields then having a verbose schema could harm performance, or may not even be possible given limitations on the size of function schemas. In these cases we can try more succinct query schemas that trade off some explicitness of direction for concision:
```typescript
import { z } from "zod";

const Filter = z.object({
  field: z.union([
    z.literal("view_count"),
    z.literal("publish_date"),
    z.literal("length_sec"),
  ]),
  comparison: z.union([
    z.literal("eq"),
    z.literal("lt"),
    z.literal("lte"),
    z.literal("gt"),
    z.literal("gte"),
  ]),
  value: z
    .union([
      z.number(),
      z.string().refine((data) => !isNaN(Date.parse(data)), {
        message:
          "If field is publish_date then value must be a ISO-8601 format date",
      }),
    ])
    .describe(
      "If field is publish_date then value must be a ISO-8601 format date"
    ),
});

const tutorialSearch = z.object({
  content_search: z
    .string()
    .describe("Similarity search query applied to video transcripts."),
  title_search: z
    .string()
    .describe(
      "Alternate version of the content search query to apply to video titles. " +
        "Should be succinct and only include key words that could be in a video title."
    ),
  filters: z
    .array(Filter)
    .default([])
    .describe(
      "Filters over specific fields. Final condition is a logical conjunction of all filters."
    ),
});
```

```typescript
const llmWithTools = llm.withStructuredOutput(tutorialSearch, {
  name: "TutorialSearch",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```
Letβs try it out:
```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "rag from scratch" });
```

```text
content_search: "rag from scratch"
title_search: "rag"
filters: []
```

```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "videos on chat langchain published in 2023" });
```

```text
content_search: "chat langchain"
title_search: "chat langchain"
filters: [
  {
    "field": "publish_date",
    "comparison": "gte",
    "value": "2023-01-01"
  }
]
```

```typescript
await queryAnalyzer.pipe(prettyPrintRunnable).invoke({
  question:
    "how to use multi-modal models in an agent, only videos under 5 minutes and with over 276 views",
});
```

```text
content_search: "multi-modal models in an agent"
title_search: "multi-modal models"
filters: [
  {
    "field": "length_sec",
    "comparison": "lt",
    "value": 300
  },
  {
    "field": "view_count",
    "comparison": "gte",
    "value": 276
  }
]
```
We can see that the analyzer handles integers well but struggles with date ranges. We can try adjusting our schema description and/or our prompt to correct this:
```typescript
import { z } from "zod";

const tutorialSearch = z.object({
  content_search: z
    .string()
    .describe("Similarity search query applied to video transcripts."),
  title_search: z
    .string()
    .describe(
      "Alternate version of the content search query to apply to video titles. " +
        "Should be succinct and only include key words that could be in a video title."
    ),
  filters: z
    .array(Filter)
    .default([])
    .describe(
      "Filters over specific fields. Final condition is a logical conjunction of all filters. " +
        "If a time period longer than one day is specified then it must result in filters that define a date range. " +
        `Keep in mind the current date is ${
          new Date().toISOString().split("T")[0]
        }.`
    ),
});
```

```typescript
const llmWithTools = llm.withStructuredOutput(tutorialSearch, {
  name: "TutorialSearch",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```

```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "videos on chat langchain published in 2023" });
```

```text
content_search: "chat langchain"
title_search: "chat langchain"
filters: [
  {
    "field": "publish_date",
    "comparison": "eq",
    "value": "2023"
  }
]
```
This seems to work!
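Once the model emits a query like this, it is up to us to translate the `filters` tuples into our index's filter syntax. As a hedged client-side sketch, here is one way to evaluate them against metadata like the example document shown earlier:

```typescript
type TutorialQuery = z.infer<typeof tutorialSearch>;

// Sketch: evaluate the generated filters against a video's metadata.
// A production index (e.g. a vector store with metadata filtering) would
// translate these tuples into its own filter syntax instead.
const matchesFilters = (
  metadata: Record<string, number | string>,
  filters: TutorialQuery["filters"]
): boolean =>
  filters.every(({ field, comparison, value }) => {
    // The example document stores length under "length", not "length_sec".
    const actual = metadata[field === "length_sec" ? "length" : field];
    const [a, b] =
      field === "publish_date"
        ? [Date.parse(String(actual)), Date.parse(String(value))]
        : [Number(actual), Number(value)];
    switch (comparison) {
      case "eq":
        return a === b;
      case "lt":
        return a < b;
      case "lte":
        return a <= b;
      case "gt":
        return a > b;
      case "gte":
        return a >= b;
    }
  });
```

For example, `matchesFilters({ view_count: 9006, publish_date: "2024-02-07 00:00:00", length: 1058 }, query.filters)` would decide whether the example video above survives the generated filters.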
Sorting: Going beyond search
----------------------------
With certain indexes, searching by field isn't the only way to retrieve results: we can also sort documents by a field and retrieve the top sorted results. With structured querying this is easy to accommodate by adding separate query fields that specify how to sort results.
```typescript
const tutorialSearch = z.object({
  content_search: z
    .string()
    .default("")
    .describe("Similarity search query applied to video transcripts."),
  title_search: z
    .string()
    .default("")
    .describe(
      "Alternate version of the content search query to apply to video titles. " +
        "Should be succinct and only include key words that could be in a video title."
    ),
  min_view_count: z
    .number()
    .optional()
    .describe("Minimum view count filter, inclusive."),
  max_view_count: z
    .number()
    .optional()
    .describe("Maximum view count filter, exclusive."),
  earliest_publish_date: z
    .date()
    .optional()
    .describe("Earliest publish date filter, inclusive."),
  latest_publish_date: z
    .date()
    .optional()
    .describe("Latest publish date filter, exclusive."),
  min_length_sec: z
    .number()
    .optional()
    .describe("Minimum video length in seconds, inclusive."),
  max_length_sec: z
    .number()
    .optional()
    .describe("Maximum video length in seconds, exclusive."),
  sort_by: z
    .enum(["relevance", "view_count", "publish_date", "length"])
    .default("relevance")
    .describe("Attribute to sort by."),
  sort_order: z
    .enum(["ascending", "descending"])
    .default("descending")
    .describe("Whether to sort in ascending or descending order."),
});
```

```typescript
const llmWithTools = llm.withStructuredOutput(tutorialSearch, {
  name: "TutorialSearch",
});

const queryAnalyzer = prompt.pipe(llmWithTools);
```

```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "What has LangChain released lately?" });
```

```text
title_search: "LangChain"
sort_by: "publish_date"
sort_order: "descending"
```

```typescript
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "What are the longest videos?" });
```

```text
sort_by: "length"
sort_order: "descending"
```
We can even support searching and sorting together. This might look like first retrieving all results above a relevancy threshold and then sorting them according to the specified attribute:
await queryAnalyzer
  .pipe(prettyPrintRunnable)
  .invoke({ question: "What are the shortest videos about agents?" });
content_search: "agents"
sort_by: "length"
sort_order: "ascending"
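To make the retrieve-then-sort idea above concrete, here is a minimal sketch of how an application might apply the sort fields from a generated query to already-retrieved results. The `Video` shape and the `results` handling are hypothetical and not part of the schema above:

// Hypothetical shape for retrieved results; only sort_by and sort_order
// come from the TutorialSearch schema above.
interface Video {
  title: string;
  view_count: number;
  publish_date: Date;
  length_sec: number;
  relevance: number;
}

// Sort results that have already passed a relevancy threshold according
// to the attributes chosen by the query analyzer.
function applySort(
  results: Video[],
  sortBy: "relevance" | "view_count" | "publish_date" | "length",
  sortOrder: "ascending" | "descending"
): Video[] {
  const key = (v: Video): number =>
    sortBy === "relevance"
      ? v.relevance
      : sortBy === "view_count"
      ? v.view_count
      : sortBy === "publish_date"
      ? v.publish_date.getTime()
      : v.length_sec;
  const sorted = [...results].sort((a, b) => key(a) - key(b));
  return sortOrder === "descending" ? sorted.reverse() : sorted;
}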
https://js.langchain.com/v0.1/docs/integrations/text_embedding/openai/
OpenAI
======
The `OpenAIEmbeddings` class uses the OpenAI API to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing `stripNewLines: false` to the constructor.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  batchSize: 512, // Default value if omitted is 512. Max is 2048
  model: "text-embedding-3-large",
});
If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when initializing the model.
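As a quick illustration of the options mentioned so far, the following sketch passes an organization id and disables newline stripping; the id value is a placeholder:

import { OpenAIEmbeddings } from "@langchain/openai";

// "org-your-organization-id" is a placeholder, not a real value.
const orgEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  organization: "org-your-organization-id",
  stripNewLines: false, // keep newline characters in the input text
});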
Specifying dimensions
---------------------
With the `text-embedding-3` class of models, you can specify the size of the embeddings you want returned. For example, by default `text-embedding-3-large` returns embeddings of dimension 3072:
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
});
const vectors = await embeddings.embedDocuments(["some text"]);
console.log(vectors[0].length);
3072
But by passing in `dimensions: 1024` we can reduce the size of our embeddings to 1024:
const embeddings1024 = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  dimensions: 1024,
});
const vectors2 = await embeddings1024.embedDocuments(["some text"]);
console.log(vectors2[0].length);
1024
Custom URLs
-----------
You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:
const model = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});
You can also pass other `ClientOptions` parameters accepted by the official SDK.
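For instance, a sketch that also tunes client-level behavior might look like the following; `timeout`, `maxRetries`, and `defaultHeaders` are standard `ClientOptions` on the official OpenAI Node SDK, but the header name and the specific values here are illustrative assumptions:

import { OpenAIEmbeddings } from "@langchain/openai";

// The proxy header and timeout value below are illustrative, not required.
const customClientModel = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://your_custom_url.com",
    timeout: 10000, // abort requests after 10 seconds
    maxRetries: 2,
    defaultHeaders: { "X-Proxy-Token": "YOUR-PROXY-TOKEN" },
  },
});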
If you are hosting on Azure OpenAI, see the [dedicated page instead](/v0.1/docs/integrations/text_embedding/azure_openai/).
https://js.langchain.com/v0.1/docs/integrations/vectorstores/milvus/
Milvus
======
[Milvus](https://milvus.io/) is a vector database built for embeddings similarity search and AI applications.
Compatibility
Only available on Node.js.
Setup
-----
1. Run a Milvus instance with Docker on your computer ([docs](https://milvus.io/docs/v2.1.x/install_standalone-docker.md)).
2. Install the Milvus Node.js SDK.
npm install -S @zilliz/milvus2-sdk-node
yarn add @zilliz/milvus2-sdk-node
pnpm add @zilliz/milvus2-sdk-node
3. Set up environment variables for Milvus before running the code:
3.1 OpenAI
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
3.2 Azure OpenAI
export AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY_HERE
export AZURE_OPENAI_API_INSTANCE_NAME=YOUR_AZURE_OPENAI_INSTANCE_NAME_HERE
export AZURE_OPENAI_API_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_COMPLETIONS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION_HERE
export AZURE_OPENAI_BASE_PATH=YOUR_AZURE_OPENAI_BASE_PATH_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
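If you prefer not to rely on the `MILVUS_URL` environment variable, the client URL can also be passed directly in the store's configuration. A minimal sketch, assuming a local standalone deployment:

import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "@langchain/openai";

// Pass the Milvus URL explicitly instead of via the MILVUS_URL env variable.
const explicitUrlStore = await Milvus.fromTexts(
  ["Hello Milvus"],
  [{ id: 1 }],
  new OpenAIEmbeddings(),
  {
    collectionName: "example_collection",
    url: "http://localhost:19530", // assumed local standalone instance
  }
);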
Index and query docs
--------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "@langchain/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Milvus.fromTexts(
  [
    "Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little Harmonic Labyrinth of the dreaded Majotaur?",
    "Achilles: Yiikes! What is that?",
    "Tortoise: They say-although I person never believed it myself-that an I Majotaur has created a tiny labyrinth sits in a pit in the middle of it, waiting innocent victims to get lost in its fears complexity. Then, when they wander and dazed into the center, he laughs and laughs at them-so hard, that he laughs them to death!",
    "Achilles: Oh, no!",
    "Tortoise: But it's only a myth. Courage, Achilles.",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
  new OpenAIEmbeddings(),
  {
    collectionName: "goldel_escher_bach",
  }
);

// Or, alternatively, build the store from documents instead
// (use one of the two constructors, not both):
// const vectorStore = await Milvus.fromDocuments(docs, new OpenAIEmbeddings(), {
//   collectionName: "goldel_escher_bach",
// });

const response = await vectorStore.similaritySearch("scared", 2);
Query docs from existing collection
-----------------------------------
import { Milvus } from "langchain/vectorstores/milvus";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await Milvus.fromExistingCollection(
  new OpenAIEmbeddings(),
  {
    collectionName: "goldel_escher_bach",
  }
);
const response = await vectorStore.similaritySearch("scared", 2);
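Once connected, the collection can also be extended through the shared vector store interface; a small sketch with an illustrative document:

import { Document } from "@langchain/core/documents";

// Append another document to the existing collection; the content is illustrative.
await vectorStore.addDocuments([
  new Document({
    pageContent: "Tortoise: Fear not, Achilles.",
    metadata: { id: 6 },
  }),
]);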
https://js.langchain.com/docs/modules/data_connection/text_embedding/
Text embedding models
=====================
info
Head to [Integrations](/v0.1/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding providers.
The Embeddings class is designed for interfacing with text embedding models. There are many embedding model providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
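To make "most similar in the vector space" concrete, here is a minimal cosine-similarity sketch; it is purely illustrative and not part of the LangChain API:

// Cosine similarity between two embedding vectors; values near 1
// indicate that the underlying texts are semantically close.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}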
The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
Get started
-----------
Embeddings can be used to create a numerical representation of textual data. This numerical representation is useful because it can be used to find similar documents.
Below is an example of how to use the OpenAI embeddings. Embeddings occasionally have different embedding methods for queries versus documents, so the embedding class exposes an `embedQuery` and an `embedDocuments` method.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";

/* Create instance */
const embeddings = new OpenAIEmbeddings();

/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
/*
[
  -0.004845875, 0.004899438, -0.016358767, -0.024475135, -0.017341806,
  0.012571548, -0.019156644, 0.009036391, -0.010227379, -0.026945334,
  ... 1526 more items
]
*/

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
/*
[
  [
    -0.0047852774, 0.0048640342, -0.01645707, -0.024395779, -0.017263541,
    ... 1531 more items
  ],
  [
    -0.009446913, -0.013253193, 0.013174579, 0.0057552797, -0.038993083,
    ... 1531 more items
  ]
]
*/
https://js.langchain.com/docs/modules/data_connection/vectorstores/
Vector stores
=============
info
Head to [Integrations](/v0.1/docs/integrations/vectorstores/) for documentation on built-in integrations with vectorstore providers.
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
Get started
-----------
This walkthrough showcases basic functionality related to VectorStores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [text embedding model](/v0.1/docs/modules/data_connection/text_embedding/) interfaces before diving into this.
This walkthrough uses a basic, unoptimized implementation called MemoryVectorStore that stores embeddings in-memory and does an exact, linear search for the most similar embeddings.
Usage
-----
### Create a new index from texts
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
#### API Reference:
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
#### API Reference:
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Here is the current base interface all vector stores share:
interface VectorStore {
  /**
   * Add more documents to an existing VectorStore.
   * Some providers support additional parameters, e.g. to associate custom ids
   * with added documents or to change the batch size of bulk inserts.
   * Returns an array of ids for the documents or nothing.
   */
  addDocuments(
    documents: Document[],
    options?: Record<string, any>
  ): Promise<string[] | void>;

  /**
   * Search for the most similar documents to a query
   */
  similaritySearch(
    query: string,
    k?: number,
    filter?: object | undefined
  ): Promise<Document[]>;

  /**
   * Search for the most similar documents to a query,
   * and return their similarity score
   */
  similaritySearchWithScore(
    query: string,
    k = 4,
    filter: object | undefined = undefined
  ): Promise<[object, number][]>;

  /**
   * Turn a VectorStore into a Retriever
   */
  asRetriever(k?: number): BaseRetriever;

  /**
   * Delete embedded documents from the vector store matching the passed in parameter.
   * Not supported by every provider.
   */
  delete(params?: Record<string, any>): Promise<void>;

  /**
   * Advanced: Add more documents to an existing VectorStore,
   * when you already have their embeddings
   */
  addVectors(
    vectors: number[][],
    documents: Document[],
    options?: Record<string, any>
  ): Promise<string[] | void>;

  /**
   * Advanced: Search for the most similar documents to a query,
   * when you already have the embedding of the query
   */
  similaritySearchVectorWithScore(
    query: number[],
    k: number,
    filter?: object
  ): Promise<[Document, number][]>;
}
You can create a vector store from a list of [Documents](https://api.js.langchain.com/classes/langchain_core_documents.Document.html), or from a list of texts and their corresponding metadata. You can also create a vector store from an existing index; the signature of this method depends on the vector store you're using, so check the documentation of the vector store you're interested in.
abstract class BaseVectorStore implements VectorStore {
  static fromTexts(
    texts: string[],
    metadatas: object[] | object,
    embeddings: EmbeddingsInterface,
    dbConfig: Record<string, any>
  ): Promise<VectorStore>;

  static fromDocuments(
    docs: Document[],
    embeddings: EmbeddingsInterface,
    dbConfig: Record<string, any>
  ): Promise<VectorStore>;
}
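One method from the interface worth calling out is `asRetriever`, which adapts a vector store to LangChain's retriever interface; a minimal sketch, reusing the in-memory store from the examples above:

// Adapt the vector store to the retriever interface, returning the
// top 2 most similar documents for each query.
const retriever = vectorStore.asRetriever(2);
const relevantDocs = await retriever.getRelevantDocuments("hello world");
console.log(relevantDocs.map((doc) => doc.pageContent));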
Which one to pick?
------------------
Here's a quick guide to help you pick the right vector store for your use case:
* If you're after something that can just run inside your Node.js application, in-memory, without any other servers to stand up, then go for [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/), [Faiss](/v0.1/docs/integrations/vectorstores/faiss/), [LanceDB](/v0.1/docs/integrations/vectorstores/lancedb/) or [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/).
* If you're looking for something that can run in-memory in browser-like environments, then go for [MemoryVectorStore](/v0.1/docs/integrations/vectorstores/memory/) or [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/).
* If you come from Python and you were looking for something similar to FAISS, try [HNSWLib](/v0.1/docs/integrations/vectorstores/hnswlib/) or [Faiss](/v0.1/docs/integrations/vectorstores/faiss/).
* If you're looking for an open-source full-featured vector database that you can run locally in a Docker container, then go for [Chroma](/v0.1/docs/integrations/vectorstores/chroma/).
* If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for [Zep](/v0.1/docs/integrations/vectorstores/zep/).
* If you're looking for an open-source, production-ready vector database that you can run locally (in a Docker container) or hosted in the cloud, then go for [Weaviate](/v0.1/docs/integrations/vectorstores/weaviate/).
* If you're using Supabase already, then look at the [Supabase](/v0.1/docs/integrations/vectorstores/supabase/) vector store to use the same Postgres database for your embeddings too.
* If you're looking for a production-ready vector store you don't have to worry about hosting yourself, then go for [Pinecone](/v0.1/docs/integrations/vectorstores/pinecone/).
* If you are already utilizing SingleStore, or if you find yourself in need of a distributed, high-performance database, you might want to consider the [SingleStore](/v0.1/docs/integrations/vectorstores/singlestore/) vector store.
* If you are looking for an online MPP (Massively Parallel Processing) data warehousing service, you might want to consider the [AnalyticDB](/v0.1/docs/integrations/vectorstores/analyticdb/) vector store.
* If you're in search of a cost-effective vector database that allows running vector search with SQL, look no further than [MyScale](/v0.1/docs/integrations/vectorstores/myscale/).
* If you're in search of a vector database that you can load from both the browser and server side, check out [CloseVector](/v0.1/docs/integrations/vectorstores/closevector/). It's a vector database that aims to be cross-platform.
* If you're looking for a scalable, open-source columnar database with excellent performance for analytical queries, then consider [ClickHouse](/v0.1/docs/integrations/vectorstores/clickhouse/).
https://js.langchain.com/v0.1/docs/use_cases/api/
Interacting with APIs
=====================
Lots of data and information is stored behind APIs. This page covers all resources available in LangChain for working with APIs.
Chains
------
If you are just getting started and you have relatively simple APIs, you should start with chains. Chains are a sequence of predetermined steps, so they are a good starting point: they give you more control and make it easier to understand what is happening.
* [OpenAPI Chain](/v0.1/docs/modules/chains/additional/openai_functions/openapi/)
Agents
------
Agents are more complex and involve multiple queries to the LLM to understand what to do. The downside of agents is that you have less control. The upside is that they are more powerful, which allows you to use them on larger and more complex schemas.
* [OpenAPI Agent](/v0.1/docs/integrations/toolkits/openapi/)
https://js.langchain.com/v0.1/docs/use_cases/tool_use/quickstart/
Quickstart
==========
In this guide, we will go over the basic ways to create Chains and Agents that call Tools. Tools can be just about anything: APIs, functions, databases, and more. Tools allow us to extend the capabilities of a model beyond just outputting text/messages. The key to using models with tools is correctly prompting a model and parsing its response so that it chooses the right tools and provides the right inputs for them.
Setup
-----
We'll use OpenAI for this guide, and will need to install its partner package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
You'll need to sign up for an OpenAI key and set it as an environment variable named `OPENAI_API_KEY`.
We'll also use the popular validation library [Zod](https://zod.dev) to define our tool schemas. It's already a dependency of `langchain`, but you can install it explicitly like this too:
npm install zod
yarn add zod
pnpm add zod
Create a tool
-------------
First, we need to create a tool to call. For this example, we will create a custom tool from a function. For more information on creating custom tools, please see [this guide](/v0.1/docs/modules/agents/tools/).
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});

await multiplyTool.invoke({ firstInt: 4, secondInt: 5 });
20
Chains
------
If we know that we only need to use a tool a fixed number of times, we can create a chain for doing so. Let's create a simple chain that just multiplies user-specified numbers.
### Function calling
One of the most reliable ways to use tools with LLMs is with function calling APIs (also sometimes called tool calling or parallel function calling). This only works with models that explicitly support function calling, like OpenAI models.
First we'll define our model:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});
Next we'll convert our LangChain Tool to the format an OpenAI function accepts, [JSONSchema](https://json-schema.org/), and bind this as the tools argument to be passed to all ChatOpenAI calls. Since we only have a single Tool and in this initial chain we want to make sure it's always used, we'll also specify `tool_choice`. See the [OpenAI chat API reference](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice) for more on these parameters.
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";

const formattedTools = [convertToOpenAITool(multiplyTool)];
console.log(JSON.stringify(formattedTools, null, 2));
[
  {
    "type": "function",
    "function": {
      "name": "multiply",
      "description": "Multiply two integers together.",
      "parameters": {
        "type": "object",
        "properties": {
          "firstInt": { "type": "number" },
          "secondInt": { "type": "number" }
        },
        "required": ["firstInt", "secondInt"],
        "additionalProperties": false,
        "$schema": "http://json-schema.org/draft-07/schema#"
      }
    }
  }
]
const modelWithTools = model.bind({
  tools: formattedTools,
  // We specify tool_choice to enforce that the 'multiply' function is called by the model.
  tool_choice: {
    type: "function",
    function: { name: "multiply" },
  },
});
Now we'll compose our tool-calling model with a [`JsonOutputToolsParser`](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputToolsParser.html), a built-in LangChain output parser that converts an OpenAI function-calling response to a list of `{"type": "TOOL_NAME", "args": {...}}` objects specifying which tools to invoke and the arguments to invoke them with.
import { JsonOutputToolsParser } from "langchain/output_parsers";

const chain = modelWithTools.pipe(new JsonOutputToolsParser());
await chain.invoke("What's 4 times 23?");
[ { type: 'multiply', args: { firstInt: 4, secondInt: 23 } } ]
Since we know we're always invoking the `multiply` tool, we can simplify our output a bit to return only the args for the `multiply` tool using the `JsonOutputKeyToolsParser`. To further simplify, we'll specify `returnSingle: true`, so that instead of a list of tool invocations our output parser returns only the first tool invocation.
import { JsonOutputKeyToolsParser } from "langchain/output_parsers";

const chain2 = modelWithTools.pipe(
  new JsonOutputKeyToolsParser({ keyName: "multiply", returnSingle: true })
);
await chain2.invoke("What's 4 times 23?");
{ firstInt: 4, secondInt: 23 }
### Invoking the tool
Great! We're able to generate tool invocations. But what if we want to actually call the tool with the LLM-generated args? To do that we just need to pass them to the tool:
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/4ac83d6a-4db9-467b-bb68-75b4efd1809f/r)
import { RunnableSequence } from "@langchain/core/runnables";

const chain3 = RunnableSequence.from([
  modelWithTools,
  new JsonOutputKeyToolsParser({ keyName: "multiply", returnSingle: true }),
  multiplyTool,
]);
await chain3.invoke("What's 4 times 23?");
92
And there we have our answer!
Agents
------
Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. That's where [Agents](/v0.1/docs/modules/agents/) come in!
LangChain comes with a number of built-in agents that are optimized for different use cases. Read about [all the available agent types](/v0.1/docs/modules/agents/agent_types/) here.
For this example, let's try out the OpenAI tools agent, which makes use of the new OpenAI tool-calling API (this is only available in the latest OpenAI models, and differs from function calling in that the model can return multiple function invocations at once).
Keep in mind that some agents only support single-argument tools; for these agents, you will need to use a `DynamicTool` instead and parse the input string yourself.
import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// You can also see the full prompt at:
// https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");
Agents can also choose between multiple tools to solve a problem. To learn how to build Chains that use multiple tools, check out the [Chains with multiple tools](/v0.1/docs/use_cases/tool_use/multiple_tools/) guide.
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Add two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt + secondInt).toString();
  },
});

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});

const exponentiateTool = new DynamicStructuredTool({
  name: "exponentiate",
  description: "Exponentiate the base to the exponent power.",
  schema: z.object({
    base: z.number(),
    exponent: z.number(),
  }),
  func: async ({ base, exponent }) => {
    return (base ** exponent).toString();
  },
});

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const tools = [addTool, multiplyTool, exponentiateTool];

const agent = await createOpenAIToolsAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
});
With an agent, we can ask questions that require arbitrarily-many uses of our tools:
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/fc11cacc-f467-4c39-a46e-8bfefa37b1f9/r)
```typescript
await agentExecutor.invoke({
  input:
    "Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result",
});
```

```text
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'base': 3, 'exponent': 5}`
243
Invoking: `add` with `{'first_int': 12, 'second_int': 3}`
15
Invoking: `multiply` with `{'first_int': 243, 'second_int': 15}`
3645
Invoking: `exponentiate` with `{'base': 3645, 'exponent': 2}`
13286025
The result of taking 3 to the fifth power and multiplying that by the sum of twelve and three, then squaring the whole result is 13,286,025.
> Finished chain.
```

```text
{
  input: 'Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result',
  output: 'The result of taking 3 to the fifth power and multiplying that by the sum of twelve and three, then squaring the whole result is 13,286,025.'
}
```
Next steps
------------------------------------------------------
Here weβve gone over the basic ways to use Tools with Chains and Agents. We recommend the following sections to explore next:
* [Agents](/v0.1/docs/modules/agents/): Everything related to Agents.
* [Choosing between multiple tools](/v0.1/docs/use_cases/tool_use/multiple_tools/): How to make tool chains that select from multiple tools.
* [Parallel tool use](/v0.1/docs/use_cases/tool_use/parallel/): How to make tool chains that invoke multiple tools at once.
https://js.langchain.com/v0.1/docs/use_cases/tool_use/agents/
Agents
======
Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. That's where [Agents](/v0.1/docs/modules/agents/) come in!
LangChain comes with a number of built-in agents that are optimized for different use cases. Read about [all the available agent types](/v0.1/docs/modules/agents/agent_types/) here.
For this example, letβs try out the OpenAI tools agent, which makes use of the new OpenAI tool-calling API (this is only available in the latest OpenAI models, and differs from function-calling in that the model can return multiple function invocations at once).
Keep in mind that some agents only support single-argument tools - for these agents, you will need to use a `DynamicTool` instead and parse the input string yourself.
Setup
---------------------------------------
Because we're using OpenAI for this guide, we'll need to install its partner package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
You'll need to sign up for an OpenAI key and set it as an environment variable named `OPENAI_API_KEY`.
We'll also use the popular validation library [Zod](https://zod.dev) to define our tool schemas. It's already a dependency of `langchain`, but you can install it explicitly like this too:
```bash
# npm
npm install zod

# Yarn
yarn add zod

# pnpm
pnpm add zod
```
Create tools
------------------------------------------------------------
First, we need to create some tools to call. For this example, we will create custom tools from functions. For more information on creating custom tools, please [see this guide](/v0.1/docs/modules/agents/tools/).
```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Add two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt + secondInt).toString();
  },
});

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});

const exponentiateTool = new DynamicStructuredTool({
  name: "exponentiate",
  description: "Exponentiate the base to the exponent power.",
  schema: z.object({
    base: z.number(),
    exponent: z.number(),
  }),
  func: async ({ base, exponent }) => {
    return (base ** exponent).toString();
  },
});

const tools = [addTool, multiplyTool, exponentiateTool];
```
Create prompt
---------------------------------------------------------------
```typescript
import { pull } from "langchain/hub";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

// Get the prompt to use - you can modify this!
// You can also see the full prompt at:
// https://smith.langchain.com/hub/hwchase17/openai-tools-agent
const prompt = await pull<ChatPromptTemplate>("hwchase17/openai-tools-agent");
```
Create agent
------------------------------------------------------------
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIToolsAgent } from "langchain/agents";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createOpenAIToolsAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
});
```
Invoke agent
------------------------------------------------------------
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/fc11cacc-f467-4c39-a46e-8bfefa37b1f9/r)
```typescript
await agentExecutor.invoke({
  input:
    "Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result",
});
```

```text
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'base': 3, 'exponent': 5}`
243
Invoking: `add` with `{'first_int': 12, 'second_int': 3}`
15
Invoking: `multiply` with `{'first_int': 243, 'second_int': 15}`
3645
Invoking: `exponentiate` with `{'base': 3645, 'exponent': 2}`
13286025
The result of taking 3 to the fifth power and multiplying that by the sum of twelve and three, then squaring the whole result is 13,286,025.
> Finished chain.
```

```text
{
  input: 'Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result',
  output: 'The result of taking 3 to the fifth power and multiplying that by the sum of twelve and three, then squaring the whole result is 13,286,025.'
}
```
https://js.langchain.com/v0.1/docs/use_cases/tool_use/multiple_tools/
Choosing between multiple tools
===============================
In the tools [Quickstart](/v0.1/docs/use_cases/tool_use/quickstart/) we went over how to build a Chain that calls a single `multiply` tool. Now letβs take a look at how we might augment this chain so that it can pick from a number of tools to call. Weβll focus on Chains since [Agents](/v0.1/docs/use_cases/tool_use/agents/) can route between multiple tools by default.
Setup
---------------------------------------
Weβll use OpenAI for this guide, and will need to install its partner package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
You'll need to sign up for an OpenAI key and set it as an environment variable named `OPENAI_API_KEY`.
We'll also use the popular validation library [Zod](https://zod.dev) to define our tool schemas. It's already a dependency of `langchain`, but you can install it explicitly like this too:
```bash
# npm
npm install zod

# Yarn
yarn add zod

# pnpm
pnpm add zod
```
Tools
---------------------------------------
Recall the `multiply` tool from the quickstart:
```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});
```
And now let's create `exponentiate` and `add` tools:
```typescript
const exponentiateTool = new DynamicStructuredTool({
  name: "exponentiate",
  description: "Exponentiate the base to the exponent power.",
  schema: z.object({
    base: z.number(),
    exponent: z.number(),
  }),
  func: async ({ base, exponent }) => {
    return (base ** exponent).toString();
  },
});

const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Add two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt + secondInt).toString();
  },
});
```
The main difference between using one tool and many is that with many we can’t be sure in advance which tool the model will invoke. So, unlike in the Quickstart, we can’t hardcode a specific tool into our chain. Instead we’ll create a function called `callSelectedTool` that takes the `JsonOutputToolsParser` output and returns the end of the chain based on the chosen tool.
This means that the `callSelectedTool` function appends the Tools that were invoked to the end of the chain at runtime. We can do this because LCEL allows runnables to return runnables themselves, which are then invoked as part of the chain.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import {
  RunnableLambda,
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});

const tools = [multiplyTool, exponentiateTool, addTool];

const toolMap: Record<string, any> = {
  multiply: multiplyTool,
  exponentiate: exponentiateTool,
  add: addTool,
};

const modelWithTools = model.bind({
  tools: tools.map(convertToOpenAITool),
});

// Function for dynamically constructing the end of the chain based on the model-selected tool.
const callSelectedTool = RunnableLambda.from(
  (toolInvocation: Record<string, any>) => {
    const selectedTool = toolMap[toolInvocation.type];
    if (!selectedTool) {
      throw new Error(
        `No matching tool available for requested type "${toolInvocation.type}".`
      );
    }
    const toolCallChain = RunnableSequence.from([
      (toolInvocation) => toolInvocation.args,
      selectedTool,
    ]);
    // We use `RunnablePassthrough.assign` here to return the intermediate `toolInvocation` params
    // as well, but you can omit if you only care about the answer.
    return RunnablePassthrough.assign({
      output: toolCallChain,
    });
  }
);

const chain = RunnableSequence.from([
  modelWithTools,
  new JsonOutputToolsParser(),
  // .map() allows us to apply a function for each item in a list of inputs.
  // Required because the model can call multiple tools at once.
  callSelectedTool.map(),
]);
```
Note the use of `.map()` above - this is required because the model can choose to call multiple tools in parallel. See [this section](/v0.1/docs/use_cases/tool_use/parallel/) for more details.
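If you haven’t seen `.map()` in isolation before, here is a minimal sketch (not from the original page) of what it does to a runnable:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

// .map() turns a runnable over a single item into a runnable over a list of items.
const double = RunnableLambda.from((x: number) => x * 2);
const doubleEach = double.map();

console.log(await doubleEach.invoke([1, 2, 3])); // [2, 4, 6]
```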
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/b6a20569-d798-4e99-8a88-7ffaea58e25f/r)
```typescript
await chain.invoke("What's 23 times 7");
```

```text
[ { type: 'multiply', args: { firstInt: 23, secondInt: 7 }, output: '161' } ]
```

```typescript
await chain.invoke("add a million plus a billion");
```

```text
[ { type: 'add', args: { firstInt: 1000000, secondInt: 1000000000 }, output: '1001000000' } ]
```

```typescript
await chain.invoke("cube thirty-seven");
```

```text
[ { type: 'exponentiate', args: { base: 37, exponent: 3 }, output: '50653' } ]
```
https://js.langchain.com/v0.1/docs/use_cases/tool_use/parallel/
Parallel tool use
=================
In the [Chains with multiple tools](/v0.1/docs/use_cases/tool_use/multiple_tools/) guide we saw how to build function-calling chains that select between multiple tools. Some models, like the OpenAI models released in Fall 2023, also support parallel function calling. This allows you to invoke multiple functions (or the same function multiple times) in a single model call.
Our previous chain from the multiple tools guide already supports this; we just need to use an OpenAI model capable of parallel function calling.
Setup
---------------------------------------
Weβll use OpenAI for this guide, and will need to install its partner package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
You'll need to sign up for an OpenAI key and set it as an environment variable named `OPENAI_API_KEY`.
We'll also use the popular validation library [Zod](https://zod.dev) to define our tool schemas. It's already a dependency of `langchain`, but you can install it explicitly like this too:
```bash
# npm
npm install zod

# Yarn
yarn add zod

# pnpm
pnpm add zod
```
Tools
---------------------------------------
Recall the tools we set up earlier:
```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Add two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt + secondInt).toString();
  },
});

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});

const exponentiateTool = new DynamicStructuredTool({
  name: "exponentiate",
  description: "Exponentiate the base to the exponent power.",
  schema: z.object({
    base: z.number(),
    exponent: z.number(),
  }),
  func: async ({ base, exponent }) => {
    return (base ** exponent).toString();
  },
});
```
Chain
---------------------------------------
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import {
  RunnableLambda,
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});

const tools = [multiplyTool, exponentiateTool, addTool];

const toolMap: Record<string, any> = {
  multiply: multiplyTool,
  exponentiate: exponentiateTool,
  add: addTool,
};

const modelWithTools = model.bind({
  tools: tools.map(convertToOpenAITool),
});

// Function for dynamically constructing the end of the chain based on the model-selected tool.
const callSelectedTool = RunnableLambda.from(
  (toolInvocation: Record<string, any>) => {
    const selectedTool = toolMap[toolInvocation.type];
    if (!selectedTool) {
      throw new Error(
        `No matching tool available for requested type "${toolInvocation.type}".`
      );
    }
    const toolCallChain = RunnableSequence.from([
      (toolInvocation) => toolInvocation.args,
      selectedTool,
    ]);
    // We use `RunnablePassthrough.assign` here to return the intermediate `toolInvocation` params
    // as well, but you can omit if you only care about the answer.
    return RunnablePassthrough.assign({
      output: toolCallChain,
    });
  }
);

const chain = RunnableSequence.from([
  modelWithTools,
  new JsonOutputToolsParser(),
  // .map() allows us to apply a function for each item in a list of inputs.
  // Required because the model can call multiple tools at once.
  callSelectedTool.map(),
]);
```
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/e00678ed-e5bf-4e74-887c-32996486f9cf/r)
```typescript
await chain.invoke(
  "What's 23 times 7, and what's five times 18 and add a million plus a billion and cube thirty-seven"
);
```

```text
[
  { type: 'multiply', args: { firstInt: 23, secondInt: 7 }, output: '161' },
  { type: 'multiply', args: { firstInt: 5, secondInt: 18 }, output: '90' },
  { type: 'add', args: { firstInt: 1000000, secondInt: 1000000000 }, output: '1001000000' },
  { type: 'exponentiate', args: { base: 37, exponent: 3 }, output: '50653' }
]
```
https://js.langchain.com/v0.1/docs/use_cases/tool_use/tool_error_handling/
Tool error handling
===================
Using a model to invoke a tool has some obvious potential failure modes. First, the model needs to return an output that can be parsed at all. Second, the model needs to return tool arguments that are valid.
We can build error handling into our chains to mitigate these failure modes.
Setup
---------------------------------------
Weβll use OpenAI for this guide, and will need to install its partner package:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
You'll need to sign up for an OpenAI key and set it as an environment variable named `OPENAI_API_KEY`.
We'll also use the popular validation library [Zod](https://zod.dev) to define our tool schemas. It's already a dependency of `langchain`, but you can install it explicitly like this too:
```bash
# npm
npm install zod

# Yarn
yarn add zod

# pnpm
pnpm add zod
```
Chain
---------------------------------------
Suppose we have the following (dummy) tool and tool-calling chain. Weβll make our tool intentionally convoluted to try and trip up the model.
```typescript
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { JsonOutputKeyToolsParser } from "langchain/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const complexTool = new DynamicStructuredTool({
  name: "complex_tool",
  description: "Do something complex with a complex tool.",
  schema: z.object({
    intArg: z.number(),
    intArg2: z.number(),
    dictArg: z.object({
      test: z.object({}),
    }),
  }),
  func: async ({ intArg, intArg2, dictArg }) => {
    // Unused for demo purposes
    console.log(dictArg);
    return (intArg * intArg2).toString();
  },
});

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const formattedTools = [convertToOpenAITool(complexTool)];

const modelWithTools = model.bind({
  tools: formattedTools,
  // We specify tool_choice to enforce that the 'complex_tool' function is called by the model.
  tool_choice: {
    type: "function",
    function: { name: "complex_tool" },
  },
});

const chain = RunnableSequence.from([
  modelWithTools,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  complexTool,
]);
```
We can see that when we try to invoke this chain with certain inputs, the model fails to correctly call the tool (it fails to provide a valid object to the `dictArg` parameter, and instead passes `potato`).
```typescript
await chain.invoke("use complex tool. the args are 5, 2.1, potato.");
```

```text
ToolInputParsingException [Error]: Received tool input did not match expected schema
    at DynamicStructuredTool.call (file:///Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools.js:63:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RunnableSequence.invoke (file:///Users/jacoblee/langchain/langchainjs/langchain-core/dist/runnables/base.js:818:27)
    at <anonymous> (/Users/jacoblee/langchain/langchainjs/examples/src/use_cases/tool_use/tool_error_handling_intro.ts:50:3) {
  output: '{"intArg":5,"floatArg":2.1,"dictArg":"potato"}'
}
```
Fallbacks
---------------------------------------------------
One way to solve this is to fall back to a better model in the event of a tool invocation error. In this case we’ll fall back to an identical chain that uses `gpt-4-1106-preview` instead of `gpt-3.5-turbo`.
```typescript
const betterModel = new ChatOpenAI({
  model: "gpt-4-1106-preview",
  temperature: 0,
}).bind({
  tools: formattedTools,
  // We specify tool_choice to enforce that the 'complex_tool' function is called by the model.
  tool_choice: {
    type: "function",
    function: { name: "complex_tool" },
  },
});

const betterChain = RunnableSequence.from([
  betterModel,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  complexTool,
]);

const chainWithFallback = chain.withFallbacks({
  fallbacks: [betterChain],
});
```
info
You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/a1f0cac4-4b59-4d0e-8612-44a2e4b51c75/r)
```typescript
await chainWithFallback.invoke(
  "use complex tool. the args are 5, 2.1, potato."
);
```

```text
10.5
```
Looking at the [LangSmith trace](https://smith.langchain.com/public/a1f0cac4-4b59-4d0e-8612-44a2e4b51c75/r) for this chain run, we can see that the first chain call fails as expected and it’s the fallback that succeeds.
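Besides falling back to a stronger model, another lightweight mitigation is to catch tool errors and return them as output instead of throwing. The following is a minimal sketch, not part of the original page, assuming the `complexTool` and `modelWithTools` defined above:

```typescript
import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";
import { JsonOutputKeyToolsParser } from "langchain/output_parsers";

// Wrap the tool invocation so that schema/parsing errors become strings
// instead of crashing the whole chain.
const tryInvokeComplexTool = RunnableLambda.from(
  async (toolArgs: Record<string, any>) => {
    try {
      return await complexTool.invoke(toolArgs);
    } catch (e) {
      return `Calling the tool with arguments ${JSON.stringify(
        toolArgs
      )} raised an error: ${e}`;
    }
  }
);

const chainWithErrorMessage = RunnableSequence.from([
  modelWithTools,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  tryInvokeComplexTool,
]);
```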
https://js.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/
Human-in-the-loop
=================
There are certain tools that we donβt trust a model to execute on its own. One thing we can do in such situations is require human approval before the tool is invoked.
Setup
---------------------------------------
Weβll need to install the following packages:
```bash
npm install langchain @langchain/core @langchain/openai readline zod
```
Weβll use `readline` to handle accepting input from the user.
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com/).
Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY
```
Chain
---------------------------------------
Suppose we have the following (dummy) tools and tool-calling chain:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  Runnable,
  RunnableLambda,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StructuredTool } from "@langchain/core/tools";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import { z } from "zod";

class CountEmails extends StructuredTool {
  schema = z.object({
    lastNDays: z.number(),
  });

  name = "count_emails";

  description = "Count the number of emails sent in the last N days.";

  async _call(input: z.infer<typeof this.schema>): Promise<string> {
    return (input.lastNDays * 2).toString();
  }
}

class SendEmail extends StructuredTool {
  schema = z.object({
    message: z.string(),
    recipient: z.string(),
  });

  name = "send_email";

  description = "Send an email.";

  async _call(input: z.infer<typeof this.schema>): Promise<string> {
    return `Successfully sent email to ${input.recipient}`;
  }
}

const tools = [new CountEmails(), new SendEmail()];
```
```typescript
const model = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0,
}).bind({
  tools,
});

/**
 * Function for dynamically constructing the end of the chain based on the model-selected tool.
 */
const callTool = (toolInvocation: Record<string, any>): Runnable => {
  const toolMap: Record<string, StructuredTool> = tools.reduce((acc, tool) => {
    acc[tool.name] = tool;
    return acc;
  }, {} as Record<string, StructuredTool>);
  const tool = toolMap[toolInvocation.type];
  return RunnablePassthrough.assign({
    output: (input, config) => tool.invoke(input.args, config),
  });
};
```
```typescript
// .map() allows us to apply a function to a list of inputs.
const callToolList = new RunnableLambda({ func: callTool }).map();
const chain = model.pipe(new JsonOutputToolsParser()).pipe(callToolList);
```
```typescript
await chain.invoke("How many emails did I get in the last 5 days?");
```

```text
[ { type: "count_emails", args: { lastNDays: 5 }, output: "10" } ]
```
Adding human approval
---------------------------------------------------------------------------------------
We can add a simple human approval step to our chain, between the output parser and the tool call step:
```typescript
import * as readline from "readline";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import { callToolList, model } from "./helpers.js";

// Use readline to ask the user for approval
function askQuestion(question: string): Promise<string> {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });
  return new Promise((resolve) => {
    rl.question(question, (answer) => {
      rl.close();
      resolve(answer);
    });
  });
}

async function humanApproval(toolInvocations: any[]): Promise<any[]> {
  const toolStrs = toolInvocations
    .map((toolCall) => JSON.stringify(toolCall, null, 2))
    .join("\n\n");
  const msg = `Do you approve of the following tool invocations\n\n${toolStrs}\n\nAnything except 'Y'/'Yes' (case-insensitive) will be treated as a no.\n`;
  // Ask the user for approval
  const resp = await askQuestion(msg);
  if (!["yes", "y"].includes(resp.toLowerCase())) {
    throw new Error(`Tool invocations not approved:\n\n${toolStrs}`);
  }
  return toolInvocations;
}

const chain = model
  .pipe(new JsonOutputToolsParser())
  .pipe(humanApproval)
  .pipe(callToolList);

const response = await chain.invoke(
  "how many emails did i get in the last 5 days?"
);
console.log(response);
/**
Do you approve of the following tool invocations

{
  "type": "count_emails",
  "args": {
    "lastNDays": 5
  }
}

Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.
y
[ { type: 'count_emails', args: { lastNDays: 5 }, output: '10' } ]
 */

const response2 = await chain.invoke(
  "Send sally@gmail.com an email saying 'What's up homie'"
);
console.log(response2);
/**
Do you approve of the following tool invocations

{
  "type": "send_email",
  "args": {
    "message": "What's up homie",
    "recipient": "sally@gmail.com"
  }
}

Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.
y
[
  {
    type: 'send_email',
    args: { message: "What's up homie", recipient: 'sally@gmail.com' },
    output: 'Successfully sent email to sally@gmail.com'
  }
]
 */
```
#### API Reference:
* [JsonOutputToolsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputToolsParser.html) from `langchain/output_parsers`
> #### Examine the LangSmith traces from the code above [here](https://smith.langchain.com/public/aac711ff-b1a1-4fd7-a298-0f20909259b6/r) and [here](https://smith.langchain.com/public/7b35ee77-b369-4b95-af4f-b83510f9a93b/r).
https://js.langchain.com/v0.1/docs/modules/chains/popular/sqlite/
SQL
===
This example demonstrates the use of `Runnables` to answer questions over a SQL database.
It uses the Chinook database, which is a sample database available for SQL Server, Oracle, MySQL, etc.
info
Looking for the older, non-LCEL version? Click [here](/v0.1/docs/modules/chains/popular/sqlite_legacy/).
Set up
------------------------------------------
First install `typeorm`:
```bash
# npm
npm install typeorm

# Yarn
yarn add typeorm

# pnpm
pnpm add typeorm
```
Then, install the dependencies needed for your database. For example, for SQLite:
```bash
# npm
npm install sqlite3

# Yarn
yarn add sqlite3

# pnpm
pnpm add sqlite3
```
LangChain offers default prompts for: default SQL, Postgres, SQLite, Microsoft SQL Server, MySQL, and SAP HANA.
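For instance, you could load the SQLite default instead of writing your own. This is a minimal sketch; the `SQL_SQLITE_PROMPT` export is taken from the import comment in the example code further below:

```typescript
// Load a built-in prompt rather than hand-writing one.
import { SQL_SQLITE_PROMPT } from "langchain/sql_db";

console.log(SQL_SQLITE_PROMPT.template);
```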
Finally, follow the instructions on [https://database.guide/2-sample-databases-sqlite/](https://database.guide/2-sample-databases-sqlite/) to get the sample database for this example.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
# npm
npm install @langchain/openai

# Yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

/**
 * This example uses Chinook database, which is a sample database available for SQL Server, Oracle, MySQL, etc.
 * To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file
 * in the examples folder.
 */
const datasource = new DataSource({
  type: "sqlite",
  database: "Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI();

/**
 * Create the first prompt template used for getting the SQL query.
 */
const prompt =
  PromptTemplate.fromTemplate(`Based on the provided SQL table schema below, write a SQL query that would answer the user's question.
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY:`);

/**
 * You can also load a default prompt by importing from "langchain/sql_db"
 *
 * import {
 *   DEFAULT_SQL_DATABASE_PROMPT
 *   SQL_POSTGRES_PROMPT
 *   SQL_SQLITE_PROMPT
 *   SQL_MSSQL_PROMPT
 *   SQL_MYSQL_PROMPT
 *   SQL_SAP_HANA_PROMPT
 * } from "langchain/sql_db";
 *
 */

/**
 * Create a new RunnableSequence where we pipe the output from `db.getTableInfo()`
 * and the users question, into the prompt template, and then into the llm.
 * We're also applying a stop condition to the llm, so that it stops when it
 * sees the `\nSQLResult:` token.
 */
const sqlQueryChain = RunnableSequence.from([
  {
    schema: async () => db.getTableInfo(),
    question: (input: { question: string }) => input.question,
  },
  prompt,
  llm.bind({ stop: ["\nSQLResult:"] }),
  new StringOutputParser(),
]);

const res = await sqlQueryChain.invoke({
  question: "How many employees are there?",
});
console.log({ res });
/**
 * { res: 'SELECT COUNT(*) FROM tracks;' }
 */

/**
 * Create the final prompt template which is tasked with getting the natural language response.
 */
const finalResponsePrompt =
  PromptTemplate.fromTemplate(`Based on the table schema below, question, SQL query, and SQL response, write a natural language response:
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY: {query}
------------
SQL RESPONSE: {response}
------------
NATURAL LANGUAGE RESPONSE:`);

/**
 * Create a new RunnableSequence where we pipe the output from the previous chain, the users question,
 * and the SQL query, into the prompt template, and then into the llm.
 * Using the result from the `sqlQueryChain` we can run the SQL query via `db.run(input.query)`.
 */
const finalChain = RunnableSequence.from([
  {
    question: (input) => input.question,
    query: sqlQueryChain,
  },
  {
    schema: async () => db.getTableInfo(),
    question: (input) => input.question,
    query: (input) => input.query,
    response: (input) => db.run(input.query),
  },
  finalResponsePrompt,
  llm,
  new StringOutputParser(),
]);

const finalResponse = await finalChain.invoke({
  question: "How many employees are there?",
});
console.log({ finalResponse });
/**
 * { finalResponse: 'There are 8 employees.' }
 */
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
You can include or exclude tables when creating the `SqlDatabase` object to help the chain focus on the tables you want. It can also reduce the number of tokens used in the chain.
```typescript
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
  includesTables: ["Track"],
});
```
If desired, you can return the used SQL command when calling the chain.
```typescript
import { DataSource } from "typeorm";
import { SqlDatabase } from "langchain/sql_db";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

/**
 * This example uses Chinook database, which is a sample database available for SQL Server, Oracle, MySQL, etc.
 * To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file
 * in the examples folder.
 */
const datasource = new DataSource({
  type: "sqlite",
  database: "Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const llm = new ChatOpenAI();

/**
 * Create the first prompt template used for getting the SQL query.
 */
const prompt =
  PromptTemplate.fromTemplate(`Based on the provided SQL table schema below, write a SQL query that would answer the user's question.
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY:`);

/**
 * Create a new RunnableSequence where we pipe the output from `db.getTableInfo()`
 * and the users question, into the prompt template, and then into the llm.
 * We're also applying a stop condition to the llm, so that it stops when it
 * sees the `\nSQLResult:` token.
 */
const sqlQueryChain = RunnableSequence.from([
  {
    schema: async () => db.getTableInfo(),
    question: (input: { question: string }) => input.question,
  },
  prompt,
  llm.bind({ stop: ["\nSQLResult:"] }),
  new StringOutputParser(),
]);

/**
 * Create the final prompt template which is tasked with getting the natural
 * language response to the SQL query.
 */
const finalResponsePrompt =
  PromptTemplate.fromTemplate(`Based on the table schema below, question, SQL query, and SQL response, write a natural language response:
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY: {query}
------------
SQL RESPONSE: {response}
------------
NATURAL LANGUAGE RESPONSE:`);

/**
 * Create a new RunnableSequence where we pipe the output from the previous chain, the users question,
 * and the SQL query, into the prompt template, and then into the llm.
 * Using the result from the `sqlQueryChain` we can run the SQL query via `db.run(input.query)`.
 *
 * Lastly we're piping the result of the first chain (the outputted SQL query) so it is
 * logged along with the natural language response.
 */
const finalChain = RunnableSequence.from([
  {
    question: (input) => input.question,
    query: sqlQueryChain,
  },
  {
    schema: async () => db.getTableInfo(),
    question: (input) => input.question,
    query: (input) => input.query,
    response: (input) => db.run(input.query),
  },
  {
    result: finalResponsePrompt.pipe(llm).pipe(new StringOutputParser()),
    // Pipe the query through here unchanged so it gets logged alongside the result.
    sql: (previousStepResult) => previousStepResult.query,
  },
]);

const finalResponse = await finalChain.invoke({
  question: "How many employees are there?",
});
console.log({ finalResponse });
/**
 * {
 *   finalResponse: {
 *     result: 'There are 8 employees.',
 *     sql: 'SELECT COUNT(*) FROM tracks;'
 *   }
 * }
 */
```
#### API Reference:
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StringOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
Disclaimer β οΈ
=============
The query chain may generate insert/update/delete queries. When this is not expected, use a custom prompt or create SQL users without write permissions.
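As a rough illustration of the custom-prompt approach, here is a minimal sketch, not from this page; it hardens the prompt but is no substitute for database-level permissions:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// A more defensive variant of the query prompt: instruct the model to only
// ever emit a single read-only SELECT statement.
const readOnlyPrompt = PromptTemplate.fromTemplate(
  `Based on the provided SQL table schema below, write a SQL query that would answer the user's question.
Only write a single read-only SELECT statement. Never write INSERT, UPDATE, DELETE, or DDL statements.
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY:`
);
```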
The final user might overload your SQL database by asking a simple question such as "run the biggest query possible". The generated query might look like:
```sql
SELECT * FROM "public"."users"
  JOIN "public"."user_permissions" ON "public"."users".id = "public"."user_permissions".user_id
  JOIN "public"."projects" ON "public"."users".id = "public"."projects".user_id
  JOIN "public"."events" ON "public"."projects".id = "public"."events".project_id;
```
For a transactional SQL database, if one of the tables above contains millions of rows, the query might cause trouble for other applications using the same database.
Most data warehouse-oriented databases support user-level quotas for limiting resource usage.
https://js.langchain.com/v0.1/docs/integrations/toolkits/sql/
SQL Agent Toolkit
=================
This example shows how to load and use an agent with a SQL toolkit.
Setup
-----
You'll need to first install `typeorm`:

```bash
npm install typeorm
# or
yarn add typeorm
# or
pnpm add typeorm
```

Usage
-----

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { createSqlAgent, SqlToolkit } from "langchain/agents/toolkits/sql";
import { DataSource } from "typeorm";

/**
 * This example uses the Chinook database, which is a sample database available for SQL Server, Oracle, MySQL, etc.
 * To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file
 * in the examples folder.
 */
export const run = async () => {
  const datasource = new DataSource({
    type: "sqlite",
    database: "Chinook.db",
  });
  const db = await SqlDatabase.fromDataSourceParams({
    appDataSource: datasource,
  });
  const model = new OpenAI({ temperature: 0 });
  const toolkit = new SqlToolkit(db, model);
  const executor = createSqlAgent(model, toolkit);

  const input = `List the total sales per country. Which country's customers spent the most?`;
  console.log(`Executing with input "${input}"...`);
  const result = await executor.invoke({ input });
  console.log(`Got output ${result.output}`);
  console.log(
    `Got intermediate steps ${JSON.stringify(result.intermediateSteps, null, 2)}`
  );
  await datasource.destroy();
};
```
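Since the example exports an async `run` function, executing it end to end is just a matter of calling it (this assumes `Chinook.db` is present in the working directory and `OPENAI_API_KEY` is set):

```typescript
await run();
```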
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [createSqlAgent](https://api.js.langchain.com/functions/langchain_agents_toolkits_sql.createSqlAgent.html) from `langchain/agents/toolkits/sql`
* [SqlToolkit](https://api.js.langchain.com/classes/langchain_agents_toolkits_sql.SqlToolkit.html) from `langchain/agents/toolkits/sql`
https://js.langchain.com/v0.1/docs/use_cases/summarization/
Summarization
=============
A common use case is wanting to summarize long documents. This naturally runs into the context window limitations. Unlike in question-answering, you can't just do some semantic search hacks to only select the chunks of text most relevant to the question (because, in this case, there is no particular question - you want to summarize everything). So what do you do then?
To get started, we would recommend checking out the summarization chain, which attacks this problem in a recursive manner.
* [Summarization Chain](/v0.1/docs/modules/chains/popular/summarize/)
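For a quick feel for the recursive approach, here is a minimal sketch using the `map_reduce` chain type, which summarizes each chunk independently and then combines the partial summaries. It assumes an OpenAI key is configured; the document content is a placeholder, and `text` is the chain's default output key in v0.1:

```typescript
import { loadSummarizationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const model = new ChatOpenAI({ temperature: 0 });
// "map_reduce" summarizes chunks independently, then merges the partial summaries.
const chain = loadSummarizationChain(model, { type: "map_reduce" });

const res = await chain.invoke({
  input_documents: [new Document({ pageContent: "..." })],
});
console.log(res.text);
```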
Example
-------
Here's an example of how you can use the [RefineDocumentsChain](/v0.1/docs/modules/chains/document/refine/) to summarize documents loaded from a YouTube video:

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```
```typescript
import { loadSummarizationChain } from "langchain/chains";
import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";
import { TokenTextSplitter } from "langchain/text_splitter";
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatAnthropic } from "@langchain/anthropic";

const loader = new SearchApiLoader({
  engine: "youtube_transcripts",
  video_id: "WTOm65IZneg",
});

const docs = await loader.load();

const splitter = new TokenTextSplitter({
  chunkSize: 10000,
  chunkOverlap: 250,
});

const docsSummary = await splitter.splitDocuments(docs);

const llmSummary = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0.3,
});

const summaryTemplate = `
You are an expert in summarizing YouTube videos.
Your goal is to create a summary of a podcast.
Below you find the transcript of a podcast:
--------
{text}
--------

The transcript of the podcast will also be used as the basis for a question and answer bot.
Provide some example questions and answers that could be asked about the podcast. Make these questions very specific.

Total output will be a summary of the video and a list of example questions the user could ask of the video.

SUMMARY AND QUESTIONS:
`;

const SUMMARY_PROMPT = PromptTemplate.fromTemplate(summaryTemplate);

const summaryRefineTemplate = `
You are an expert in summarizing YouTube videos.
Your goal is to create a summary of a podcast.
We have provided an existing summary up to a certain point: {existing_answer}

Below you find the transcript of a podcast:
--------
{text}
--------

Given the new context, refine the summary and example questions.
The transcript of the podcast will also be used as the basis for a question and answer bot.
Provide some example questions and answers that could be asked about the podcast. Make
these questions very specific.
If the context isn't useful, return the original summary and questions.

Total output will be a summary of the video and a list of example questions the user could ask of the video.

SUMMARY AND QUESTIONS:
`;

const SUMMARY_REFINE_PROMPT = PromptTemplate.fromTemplate(
  summaryRefineTemplate
);

const summarizeChain = loadSummarizationChain(llmSummary, {
  type: "refine",
  verbose: true,
  questionPrompt: SUMMARY_PROMPT,
  refinePrompt: SUMMARY_REFINE_PROMPT,
});

const summary = await summarizeChain.run(docsSummary);

console.log(summary);

/*
  Here is a summary of the key points from the podcast transcript:

  - Jimmy helps provide hearing aids and cochlear implants to deaf and hard-of-hearing people who can't afford them. He helps over 1,000 people hear again.
  - Jimmy surprises recipients with $10,000 cash gifts in addition to the hearing aids. He also gifts things like jet skis, basketball game tickets, and trips to concerts.
  - Jimmy travels internationally to provide hearing aids, visiting places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia.
  - Jimmy donates $100,000 to organizations around the world that teach sign language.
  - The recipients are very emotional and grateful to be able to hear their loved ones again.

  Here are some example questions and answers about the podcast:

  Q: How many people did Jimmy help regain their hearing?
  A: Jimmy helped over 1,000 people regain their hearing.

  Q: What types of hearing devices did Jimmy provide to the recipients?
  A: Jimmy provided cutting-edge hearing aids and cochlear implants.

  Q: In addition to the hearing devices, what surprise gifts did Jimmy give some recipients?
  A: In addition to hearing devices, Jimmy surprised some recipients with $10,000 cash gifts, jet skis, basketball game tickets, and concert tickets.

  Q: What countries did Jimmy travel to in order to help people?
  A: Jimmy traveled to places like Mexico, Guatemala, Brazil, South Africa, Malawi, and Indonesia.

  Q: How much money did Jimmy donate to organizations that teach sign language?
  A: Jimmy donated $100,000 to sign language organizations around the world.

  Q: How did the recipients react when they were able to hear again?
  A: The recipients were very emotional and grateful, with many crying tears of joy at being able to hear their loved ones again.
*/
```
#### API Reference:
* [loadSummarizationChain](https://api.js.langchain.com/functions/langchain_chains.loadSummarizationChain.html) from `langchain/chains`
* [SearchApiLoader](https://api.js.langchain.com/classes/langchain_document_loaders_web_searchapi.SearchApiLoader.html) from `langchain/document_loaders/web/searchapi`
* [TokenTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) from `langchain/text_splitter`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
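If you prefer the runnable interface over the legacy `.run()` call, the same chain can also be invoked like this (a sketch, assuming the `summarizeChain` from above; `output_text` is the refine chain's default output key in v0.1):

```typescript
const res = await summarizeChain.invoke({ input_documents: docsSummary });
console.log(res.output_text);
```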
https://js.langchain.com/v0.1/docs/use_cases/graph/quickstart/
Quickstart
==========
In this guide we'll go over the basic ways to create a Q&A chain over a graph database. These systems will allow us to ask a question about the data in a graph database and get back a natural language answer.
⚠️ Security note ⚠️
-------------------
Building Q&A systems over graph databases requires executing model-generated graph queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your chain/agent's needs. This will mitigate, though not eliminate, the risks of building a model-driven system. For more on general security best practices, [see here](/v0.1/docs/security/).
Architecture
------------
At a high level, the steps of most graph chains are:
1. **Convert question to a graph database query**: Model converts user input to a graph database query (e.g. Cypher).
2. **Execute graph database query**: Execute the graph database query.
3. **Answer the question**: Model responds to user input using the query results.
![SQL Use Case Diagram](/v0.1/assets/images/graph_usecase-34d891523e6284bb6230b38c5f8392e5.png)
Setup
-----
First, install the required packages and set environment variables. In this example, we will be using a Neo4j graph database.

#### Install dependencies

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i langchain @langchain/community @langchain/openai neo4j-driver
# or
yarn add langchain @langchain/community @langchain/openai neo4j-driver
# or
pnpm add langchain @langchain/community @langchain/openai neo4j-driver
```

#### Set environment variables

We'll use OpenAI in this example:

```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Graph schema
------------
In order for an LLM to be able to generate a Cypher statement, it needs information about the graph schema. When you instantiate a graph object, it retrieves the information about the graph schema. If you later make any changes to the graph, you can run the `refreshSchema` method to refresh the schema information.

```typescript
await graph.refreshSchema();
console.log(graph.schema);
```

```
Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},
Person {name: STRING},
Genre {name: STRING}
Relationship properties are the following:

The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre),
(:Person)-[:DIRECTED]->(:Movie),
(:Person)-[:ACTED_IN]->(:Movie)
```

Great! We've got a graph database that we can query. Now let's try hooking it up to an LLM.

Chain
-----
Let's use a simple chain that takes a question, turns it into a Cypher query, executes the query, and uses the result to answer the original question.
![graph_chain.webp](/v0.1/assets/images/graph_chain-6379941793e0fa985e51e4bda0329403.webp)
LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](https://python.langchain.com/docs/use_cases/graph/graph_cypher_qa)
```typescript
import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const chain = GraphCypherQAChain.fromLLM({
  llm,
  graph,
});

const response = await chain.invoke({
  query: "What was the cast of the Casino?",
});
response;
```

```
{ result: "James Woods, Joe Pesci, Robert De Niro, Sharon Stone" }
```

### Next steps
For more complex query generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more, check out:
* [Prompting strategies](/v0.1/docs/use_cases/graph/prompting/): Advanced prompt engineering techniques.
* [Mapping values](/v0.1/docs/use_cases/graph/mapping/): Techniques for mapping values from questions to the database.
* [Semantic layer](/v0.1/docs/use_cases/graph/semantic/): Techniques for implementing semantic layers.
* [Constructing graphs](/v0.1/docs/use_cases/graph/construction/): Techniques for constructing knowledge graphs.
https://js.langchain.com/v0.1/docs/use_cases/graph/construction/
Constructing knowledge graphs
=============================
In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructed graph can then be used as a knowledge base in a RAG application. At a high level, the steps of constructing a knowledge graph from text are:
1. Extracting structured information from text: A model is used to extract structured graph information from text.
2. Storing into graph database: Storing the extracted structured graph information into a graph database enables downstream RAG applications.
Setup
-----
#### Install dependencies

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i langchain @langchain/community @langchain/openai neo4j-driver zod
# or
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod
# or
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
```

#### Set environment variables

We'll use OpenAI in this example:

```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The below example will create a connection with a Neo4j database.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });
LLM Graph Transformer
---------------------
Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The `LLMGraphTransformer` converts text documents into structured graph documents by leveraging an LLM to parse and categorize entities and their relationships. The choice of LLM significantly influences the output by determining the accuracy and nuance of the extracted graph data.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm";

const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-4-turbo-preview",
});

const llmGraphTransformer = new LLMGraphTransformer({
  llm: model,
});
```
Now we can pass in example text and examine the results.
```typescript
import { Document } from "@langchain/core/documents";

let text = `
Marie Curie, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.
She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.
Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.
She was, in 1906, the first woman to become a professor at the University of Paris.
`;

const result = await llmGraphTransformer.convertToGraphDocuments([
  new Document({ pageContent: text }),
]);

console.log(`Nodes: ${result[0].nodes.length}`);
console.log(`Relationships: ${result[0].relationships.length}`);
```

```
Nodes: 8
Relationships: 7
```

Note that the graph construction process is non-deterministic since we are using an LLM. Therefore, you might get slightly different results on each execution. Examine the following image to better grasp the structure of the generated knowledge graph.
![graph_construction1.png](/v0.1/assets/images/graph_construction1-2b4d31978d58696d5a6a52ad92ae088f.png)
Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements.
```typescript
const llmGraphTransformerFiltered = new LLMGraphTransformer({
  llm: model,
  allowedNodes: ["PERSON", "COUNTRY", "ORGANIZATION"],
  allowedRelationships: ["NATIONALITY", "LOCATED_IN", "WORKED_AT", "SPOUSE"],
  strictMode: false,
});

const result_filtered =
  await llmGraphTransformerFiltered.convertToGraphDocuments([
    new Document({ pageContent: text }),
  ]);

console.log(`Nodes: ${result_filtered[0].nodes.length}`);
console.log(`Relationships: ${result_filtered[0].relationships.length}`);
```

```
Nodes: 6
Relationships: 4
```
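To see what was actually extracted rather than just the counts, you can print the node and relationship objects themselves. A small sketch, assuming the `id`, `type`, `source`, and `target` fields exposed by the transformer's graph documents:

```typescript
// Inspect extracted nodes and relationships (field names assumed as above).
for (const node of result_filtered[0].nodes) {
  console.log(`(${node.id}:${node.type})`);
}
for (const rel of result_filtered[0].relationships) {
  console.log(`(${rel.source.id})-[:${rel.type}]->(${rel.target.id})`);
}
```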
For a better understanding of the generated graph, we can again visualize it.
![graph_construction2.png](/v0.1/assets/images/graph_construction2-8b43506ae0fb3a006eaa4ba83fea8af5.png)
Storing to graph database
-------------------------
The generated graph documents can be stored to a graph database using the `addGraphDocuments` method.
```typescript
await graph.addGraphDocuments(result_filtered);
```
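As a quick sanity check (a sketch reusing the `graph` connection from above), you can count what was written:

```typescript
// Counts all nodes currently in the database, including the stored documents.
const nodeCount = await graph.query(`MATCH (n) RETURN count(n) AS count`);
console.log(nodeCount);
```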
https://js.langchain.com/v0.1/docs/use_cases/graph/mapping/
Mapping values to database
==========================
In this guide we'll go over strategies to improve graph database query generation by mapping values from user inputs to the database. When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database. Therefore, we can introduce a new step in the graph database QA system to accurately map values.
Setup
-----
#### Install dependencies

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i langchain @langchain/community @langchain/openai neo4j-driver zod
# or
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod
# or
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
```

#### Set environment variables

We'll use OpenAI in this example:

```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Detecting entities in the user input
------------------------------------
We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database.
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const entities = z
  .object({
    names: z
      .array(z.string())
      .describe("All the person or movies appearing in the text"),
  })
  .describe("Identifying information about entities.");

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are extracting person and movies from the text."],
  [
    "human",
    "Use the given format to extract information from the following\ninput: {question}",
  ],
]);

const entityChain = prompt.pipe(llm.withStructuredOutput(entities));
```
We can test the entity extraction chain.
```typescript
// Named `detectedEntities` so it doesn't collide with the Zod schema `entities` above.
const detectedEntities = await entityChain.invoke({
  question: "Who played in Casino movie?",
});
detectedEntities;
```

```
{ names: [ "Casino" ] }
```
We will utilize a simple `CONTAINS` clause to match entities to the database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings; a sketch of that follows the next example.
```typescript
const matchQuery = `MATCH (p:Person|Movie)
WHERE p.name CONTAINS $value OR p.title CONTAINS $value
RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type
LIMIT 1`;

const matchToDatabase = async (values) => {
  let result = "";
  for (const entity of values.names) {
    const response = await graph.query(matchQuery, {
      value: entity,
    });
    if (response.length > 0) {
      result += `${entity} maps to ${response[0]["result"]} ${response[0]["type"]} in database\n`;
    }
  }
  return result;
};

await matchToDatabase(detectedEntities);
```

```
"Casino maps to Casino Movie in database\n"
```
Custom Cypher generating chain
------------------------------
We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement. We will be using the LangChain expression language to accomplish that.
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

// Generate Cypher statement based on natural language input
const cypherTemplate = `Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}
Entities in the question map to the following database values:
{entities_list}
Question: {question}
Cypher query:`;

const cypherPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Given an input question, convert it to a Cypher query. No pre-amble.",
  ],
  ["human", cypherTemplate],
]);

const llmWithStop = llm.bind({ stop: ["\nCypherResult:"] });

const cypherResponse = RunnableSequence.from([
  RunnablePassthrough.assign({ names: entityChain }),
  RunnablePassthrough.assign({
    entities_list: async (x) => matchToDatabase(x.names),
    schema: async (_) => graph.getSchema(),
  }),
  cypherPrompt,
  llmWithStop,
  new StringOutputParser(),
]);
```

```typescript
const cypher = await cypherResponse.invoke({
  question: "Who played in Casino movie?",
});
cypher;
```

```
'MATCH (:Movie {title: "Casino"})<-[:ACTED_IN]-(actor)\nRETURN actor.name'
```
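To close the loop, the generated statement can be executed against the same graph (a sketch reusing the `graph` connection and the `cypher` string from above):

```typescript
// Runs the model-generated Cypher and prints the raw records.
const records = await graph.query(cypher);
console.log(records);
```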
https://js.langchain.com/v0.1/docs/use_cases/graph/semantic/
Semantic layer over graph database
==================================
You can use database queries to retrieve information from a graph database like Neo4j. One option is to use LLMs to generate Cypher statements. While that option provides excellent flexibility, the solution can be brittle and may not consistently generate precise Cypher statements. Instead of generating Cypher statements, we can implement Cypher templates as tools in a semantic layer that an LLM agent can interact with.
![graph_semantic.png](/v0.1/assets/images/graph_semantic-365248d76b7862193c33f44eaa6ecaeb.png)
Setup
-----
#### Install dependencies

Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm i langchain @langchain/community @langchain/openai neo4j-driver zod
# or
yarn add langchain @langchain/community @langchain/openai neo4j-driver zod
# or
pnpm add langchain @langchain/community @langchain/openai neo4j-driver zod
```

#### Set environment variables

We'll use OpenAI in this example:

```bash
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
```
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
```bash
NEO4J_URI="bolt://localhost:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```
The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors.
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Custom tools with Cypher templates
----------------------------------
A semantic layer consists of various tools exposed to an LLM that it can use to interact with a knowledge graph. They can be of various complexity. You can think of each tool in a semantic layer as a function.
The function we will implement is to retrieve information about movies or their cast.
```typescript
const descriptionQuery = `MATCH (m:Movie|Person)
WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate
MATCH (m)-[r:ACTED_IN|HAS_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name) + "\nyear: "+coalesce(m.released,"") +"\n" + reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1`;

const getInformation = async (entity: string) => {
  try {
    const data = await graph.query(descriptionQuery, { candidate: entity });
    return data[0]["context"];
  } catch (error) {
    return "No information was found";
  }
};
```
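Called directly, the helper returns the formatted context string for an entity; a quick check before wiring it into a tool (the exact output depends on your data):

```typescript
// Should print something like "type:Movie\ntitle: Casino\nyear: ..." for a known title.
console.log(await getInformation("Casino"));
```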
You can observe that we have defined the Cypher statement used to retrieve information. Therefore, we can avoid generating Cypher statements and use the LLM agent to only populate the input parameters. To provide additional information to the LLM agent about when to use the tool and its input parameters, we wrap the function as a tool.

```typescript
import { StructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const informationInput = z.object({
  entity: z.string().describe("movie or a person mentioned in the question"),
});

class InformationTool extends StructuredTool {
  schema = informationInput;

  name = "Information";

  description =
    "useful for when you need to answer questions about various actors or movies";

  async _call(input: z.infer<typeof informationInput>): Promise<string> {
    return getInformation(input.entity);
  }
}
```
OpenAI Agent
------------
LangChain expression language makes it very convenient to define an agent to interact with a graph database over the semantic layer.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor } from "langchain/agents";
import { formatToOpenAIFunctionMessages } from "langchain/agents/format_scratchpad";
import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser";
import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { RunnableSequence } from "@langchain/core/runnables";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const tools = [new InformationTool()];

const llmWithTools = llm.bind({
  functions: tools.map(convertToOpenAIFunction),
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that finds information about movies and recommends them. If tools require follow up questions, make sure to ask the user for clarification. Make sure to include any available options that need to be clarified in the follow up questions. Do only the things the user specifically requested.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const _formatChatHistory = (chatHistory) => {
  const buffer: Array<BaseMessage> = [];
  for (const [human, ai] of chatHistory) {
    buffer.push(new HumanMessage({ content: human }));
    buffer.push(new AIMessage({ content: ai }));
  }
  return buffer;
};

const agent = RunnableSequence.from([
  {
    input: (x) => x.input,
    chat_history: (x) => {
      if ("chat_history" in x) {
        return _formatChatHistory(x.chat_history);
      }
      return [];
    },
    agent_scratchpad: (x) => {
      if ("steps" in x) {
        return formatToOpenAIFunctionMessages(x.steps);
      }
      return [];
    },
  },
  prompt,
  llmWithTools,
  new OpenAIFunctionsAgentOutputParser(),
]);

const agentExecutor = new AgentExecutor({ agent, tools });
```

```typescript
await agentExecutor.invoke({ input: "Who played in Casino?" });
```

```
{
  input: "Who played in Casino?",
  output: 'The movie "Casino" starred James Woods, Joe Pesci, Robert De Niro, and Sharon Stone.'
}
```
https://js.langchain.com/v0.1/docs/use_cases/graph/prompting/
Prompting strategies
====================
In this guide weβll go over prompting strategies to improve graph database query generation. Weβll largely focus on methods for getting relevant database-specific information in your prompt.
Setup
---------------------------------------

#### Install dependencies
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* yarn
* pnpm
npm i langchain @langchain/community @langchain/openai neo4j-driver
yarn add langchain @langchain/community @langchain/openai neo4j-driver
pnpm add langchain @langchain/community @langchain/openai neo4j-driver
#### Set environment variables
Weβll use OpenAI in this example:
OPENAI_API_KEY=your-api-key

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
Next, we need to define Neo4j credentials. Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database.
NEO4J_URI="bolt://localhost:7687"NEO4J_USERNAME="neo4j"NEO4J_PASSWORD="password"
The example below creates a connection to a Neo4j database and populates it with example data about movies and their actors.
const url = Deno.env.get("NEO4J_URI");const username = Deno.env.get("NEO4J_USER");const password = Deno.env.get("NEO4J_PASSWORD");
import "neo4j-driver";import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";const graph = await Neo4jGraph.initialize({ url, username, password });// Import movie informationconst moviesQuery = `LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'AS rowMERGE (m:Movie {id:row.movieId})SET m.released = date(row.released), m.title = row.title, m.imdbRating = toFloat(row.imdbRating)FOREACH (director in split(row.director, '|') | MERGE (p:Person {name:trim(director)}) MERGE (p)-[:DIRECTED]->(m))FOREACH (actor in split(row.actors, '|') | MERGE (p:Person {name:trim(actor)}) MERGE (p)-[:ACTED_IN]->(m))FOREACH (genre in split(row.genres, '|') | MERGE (g:Genre {name:trim(genre)}) MERGE (m)-[:IN_GENRE]->(g))`;await graph.query(moviesQuery);
Schema refreshed successfully.
[]
Filtering graph schema
----------------------
At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements. Letβs say we are dealing with the following graph schema:
await graph.refreshSchema();
console.log(graph.schema);
Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}, Chunk {embedding: LIST, id: STRING, text: STRING}

Relationship properties are the following:

The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie)
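The JS examples in this guide don't show a built-in option for excluding node types from the schema, so one minimal workaround (an illustrative sketch, not a documented feature) is to hand-maintain a reduced schema string and pass it wherever a prompt expects `{schema}`. Here, based on the schema printed above, we drop the internal `Chunk` node:

// A hand-reduced schema string: the same as graph.schema above, minus the Chunk node.
const filteredSchema = `Node properties are the following:
Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING}, Person {name: STRING}, Genre {name: STRING}
The relationships are the following:
(:Movie)-[:IN_GENRE]->(:Genre), (:Person)-[:DIRECTED]->(:Movie), (:Person)-[:ACTED_IN]->(:Movie)`;

// When formatting the prompts below, pass { schema: filteredSchema } instead of graph.schema.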
Few-shot examples
---------------------------------------------------------------------------
Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.
Letβs say we have the following examples:
const examples = [
  {
    question: "How many artists are there?",
    query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)",
  },
  {
    question: "Which actors played in the movie Casino?",
    query: "MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name",
  },
  {
    question: "How many movies has Tom Hanks acted in?",
    query: "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)",
  },
  {
    question: "List all the genres of the movie Schindler's List",
    query: "MATCH (m:Movie {{title: 'Schindler\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name",
  },
  {
    question: "Which actors have worked in movies from both the comedy and action genres?",
    query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name",
  },
  {
    question: "Which directors have made movies with at least three different actors named 'John'?",
    query: "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name",
  },
  {
    question: "Identify movies where directors also played a role in the film.",
    query: "MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name",
  },
  {
    question: "Find the actor with the highest number of movies in the database.",
    query: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1",
  },
];
We can create a few-shot prompt with them like so:
import { FewShotPromptTemplate, PromptTemplate } from "@langchain/core/prompts";

const examplePrompt = PromptTemplate.fromTemplate(
  "User input: {question}\nCypher query: {query}"
);
const prompt = new FewShotPromptTemplate({
  examples: examples.slice(0, 5),
  examplePrompt,
  prefix:
    "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.",
  suffix: "User input: {question}\nCypher query: ",
  inputVariables: ["question", "schema"],
});
console.log(
  await prompt.format({
    question: "How many artists are there?",
    schema: "foo",
  })
);
You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.

Here is the schema information
foo.

Below are a number of examples of questions and their corresponding Cypher queries.

User input: How many artists are there?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)

User input: Which actors played in the movie Casino?
Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name

User input: How many movies has Tom Hanks acted in?
Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)

User input: List all the genres of the movie Schindler's List
Cypher query: MATCH (m:Movie {title: 'Schindler\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name

User input: Which actors have worked in movies from both the comedy and action genres?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name

User input: How many artists are there?
Cypher query: 
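In a real pipeline you would substitute the live schema for the "foo" placeholder. For example, reusing the `graph` connection from the setup above (a small illustrative sketch):

// Sketch: format the few-shot prompt with the real database schema.
const formattedPrompt = await prompt.format({
  question: "How many artists are there?",
  schema: graph.schema, // the schema string printed earlier in this guide
});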
Dynamic few-shot examples
---------------------------------------------------------------------------------------------------
If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they donβt fit in the modelβs context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.
We can do just this using an ExampleSelector. In this case weβll use a [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones:
import { OpenAIEmbeddings } from "@langchain/openai";import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";import { Neo4jVectorStore } from "@langchain/community/vectorstores/neo4j_vector";const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples( examples, new OpenAIEmbeddings(), Neo4jVectorStore, { k: 5, inputKeys: ["question"], preDeleteCollection: true, url, username, password, });
await exampleSelector.selectExamples({
  question: "how many artists are there?",
});
[ { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)", question: "How many artists are there?" }, { query: "MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)", question: "How many movies has Tom Hanks acted in?" }, { query: "MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE"... 84 more characters, question: "Which actors have worked in movies from both the comedy and action genres?" }, { query: "MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH"... 71 more characters, question: "Which directors have made movies with at least three different actors named 'John'?" }, { query: "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DES"... 9 more characters, question: "Find the actor with the highest number of movies in the database." }]
To use it, we can pass the ExampleSelector directly into our FewShotPromptTemplate:
const prompt = new FewShotPromptTemplate({
  exampleSelector,
  examplePrompt,
  prefix:
    "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n\nHere is the schema information\n{schema}.\n\nBelow are a number of examples of questions and their corresponding Cypher queries.",
  suffix: "User input: {question}\nCypher query: ",
  inputVariables: ["question", "schema"],
});
console.log(
  await prompt.format({
    question: "how many artists are there?",
    schema: "foo",
  })
);
You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.

Here is the schema information
foo.

Below are a number of examples of questions and their corresponding Cypher queries.

User input: How many artists are there?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)

User input: How many movies has Tom Hanks acted in?
Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)

User input: Which actors have worked in movies from both the comedy and action genres?
Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name

User input: Which directors have made movies with at least three different actors named 'John'?
Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name

User input: Find the actor with the highest number of movies in the database.
Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1

User input: how many artists are there?
Cypher query: 
import { ChatOpenAI } from "@langchain/openai";import { GraphCypherQAChain } from "langchain/chains/graph_qa/cypher";const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0,});const chain = GraphCypherQAChain.fromLLM({ graph, llm, cypherPrompt: prompt,});
await chain.invoke({
  query: "How many actors are in the graph?",
});
{ result: "There are 967 actors in the graph." }
Generative Agents
=================
This script implements a generative agent based on the paper [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442) by Park, et al.
In it, we leverage a time-weighted Memory object backed by a LangChain retriever. The script below creates two instances of Generative Agents, Tommie and Eve, and runs a simulation of their interaction with their observations. Tommie takes on the role of a person moving to a new town who is looking for a job, and Eve takes on the role of a career counselor.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
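Before running the full simulation below, here is a minimal, self-contained sketch of the time-weighted retriever that backs each agent's memory. The `decayRate` value and the sample memories are illustrative assumptions, not values used by the script:

import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";

// Relevance combines semantic similarity with recency, so older,
// rarely-accessed memories gradually fall out of retrieval results.
const memoryRetriever = new TimeWeightedVectorStoreRetriever({
  vectorStore: new MemoryVectorStore(new OpenAIEmbeddings()),
  decayRate: 0.01, // illustrative: lower values keep old memories "fresh" longer
  k: 4,
});

await memoryRetriever.addDocuments([
  new Document({ pageContent: "Tommie sees the new home" }),
  new Document({ pageContent: "Tommie is hungry" }),
]);

// Recently added or accessed memories that match the query rank highest.
const relevantMemories = await memoryRetriever.getRelevantDocuments(
  "What does Tommie think of the new house?"
);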
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";
import {
  GenerativeAgentMemory,
  GenerativeAgent,
} from "langchain/experimental/generative_agents";

const Simulation = async () => {
  const userName = "USER";
  const llm = new OpenAI({
    temperature: 0.9,
    maxTokens: 1500,
  });

  const createNewMemoryRetriever = async () => {
    // Create a new, demo in-memory vector store retriever unique to the agent.
    // Better results can be achieved with a more sophisticated vector store.
    const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());
    const retriever = new TimeWeightedVectorStoreRetriever({
      vectorStore,
      otherScoreKeys: ["importance"],
      k: 15,
    });
    return retriever;
  };

  // Initializing Tommie
  const tommiesMemory: GenerativeAgentMemory = new GenerativeAgentMemory(
    llm,
    await createNewMemoryRetriever(),
    { reflectionThreshold: 8 }
  );
  const tommie: GenerativeAgent = new GenerativeAgent(llm, tommiesMemory, {
    name: "Tommie",
    age: 25,
    traits: "anxious, likes design, talkative",
    status: "looking for a job",
  });

  console.log("Tommie's first summary:\n", await tommie.getSummary());
  /*
    Tommie's first summary:
    Name: Tommie (age: 25)
    Innate traits: anxious, likes design, talkative
    Tommie is an individual with no specific core characteristics described.
  */

  // Let's give Tommie some memories!
  const tommieObservations = [
    "Tommie remembers his dog, Bruno, from when he was a kid",
    "Tommie feels tired from driving so far",
    "Tommie sees the new home",
    "The new neighbors have a cat",
    "The road is noisy at night",
    "Tommie is hungry",
    "Tommie tries to get some rest.",
  ];
  for (const observation of tommieObservations) {
    await tommie.addMemory(observation, new Date());
  }

  // Checking Tommie's summary again after giving him some memories
  console.log(
    "Tommie's second summary:\n",
    await tommie.getSummary({ forceRefresh: true })
  );
  /*
    Tommie's second summary:
    Name: Tommie (age: 25)
    Innate traits: anxious, likes design, talkative
    Tommie remembers his dog, is tired from driving, sees a new home with neighbors who have a cat, is aware of the noisy road at night, is hungry, and tries to get some rest.
  */

  const interviewAgent = async (
    agent: GenerativeAgent,
    message: string
  ): Promise<string> => {
    // Simple wrapper helping the user interact with the agent
    const newMessage = `${userName} says ${message}`;
    const response = await agent.generateDialogueResponse(newMessage);
    return response[1];
  };

  // Let's have Tommie start going through a day in his life.
  const observations = [
    "Tommie wakes up to the sound of a noisy construction site outside his window.",
    "Tommie gets out of bed and heads to the kitchen to make himself some coffee.",
    "Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.",
    "Tommie finally finds the filters and makes himself a cup of coffee.",
    "The coffee tastes bitter, and Tommie regrets not buying a better brand.",
    "Tommie checks his email and sees that he has no job offers yet.",
    "Tommie spends some time updating his resume and cover letter.",
    "Tommie heads out to explore the city and look for job openings.",
    "Tommie sees a sign for a job fair and decides to attend.",
    "The line to get in is long, and Tommie has to wait for an hour.",
    "Tommie meets several potential employers at the job fair but doesn't receive any offers.",
    "Tommie leaves the job fair feeling disappointed.",
    "Tommie stops by a local diner to grab some lunch.",
    "The service is slow, and Tommie has to wait for 30 minutes to get his food.",
    "Tommie overhears a conversation at the next table about a job opening.",
    "Tommie asks the diners about the job opening and gets some information about the company.",
    "Tommie decides to apply for the job and sends his resume and cover letter.",
    "Tommie continues his search for job openings and drops off his resume at several local businesses.",
    "Tommie takes a break from his job search to go for a walk in a nearby park.",
    "A dog approaches and licks Tommie's feet, and he pets it for a few minutes.",
    "Tommie sees a group of people playing frisbee and decides to join in.",
    "Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.",
    "Tommie goes back to his apartment to rest for a bit.",
    "A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.",
    "Tommie starts to feel frustrated with his job search.",
    "Tommie calls his best friend to vent about his struggles.",
    "Tommie's friend offers some words of encouragement and tells him to keep trying.",
    "Tommie feels slightly better after talking to his friend.",
  ];

  // Let's send Tommie on his way. We'll check in on his summary every few observations to watch him evolve
  for (let i = 0; i < observations.length; i += 1) {
    const observation = observations[i];
    const [, reaction] = await tommie.generateReaction(observation);
    console.log("\x1b[32m", observation, "\x1b[0m", reaction);
    if ((i + 1) % 20 === 0) {
      console.log("*".repeat(40));
      console.log(
        "\x1b[34m",
        `After ${
          i + 1
        } observations, Tommie's summary is:\n${await tommie.getSummary({
          forceRefresh: true,
        })}`,
        "\x1b[0m"
      );
      console.log("*".repeat(40));
    }
  }
  /*
    Tommie wakes up to the sound of a noisy construction site outside his window.
    Tommie REACT: Tommie groans in frustration and covers his ears with his pillow.
    Tommie gets out of bed and heads to the kitchen to make himself some coffee.
    Tommie REACT: Tommie rubs his tired eyes before heading to the kitchen to make himself some coffee.
    Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.
    Tommie REACT: Tommie groans and looks through his moving boxes in search of coffee filters.
    Tommie finally finds the filters and makes himself a cup of coffee.
    Tommie REACT: Tommie sighs in relief and prepares himself a much-needed cup of coffee.
    The coffee tastes bitter, and Tommie regrets not buying a better brand.
    Tommie REACT: Tommie frowns in disappointment as he takes a sip of the bitter coffee.
    Tommie checks his email and sees that he has no job offers yet.
    Tommie REACT: Tommie sighs in disappointment before pushing himself away from the computer with a discouraged look on his face.
    Tommie spends some time updating his resume and cover letter.
    Tommie REACT: Tommie takes a deep breath and stares at the computer screen as he updates his resume and cover letter.
    Tommie heads out to explore the city and look for job openings.
    Tommie REACT: Tommie takes a deep breath and steps out into the city, ready to find the perfect job opportunity.
    Tommie sees a sign for a job fair and decides to attend.
    Tommie REACT: Tommie takes a deep breath and marches towards the job fair, determination in his eyes.
    The line to get in is long, and Tommie has to wait for an hour.
    Tommie REACT: Tommie groans in frustration as he notices the long line.
    Tommie meets several potential employers at the job fair but doesn't receive any offers.
    Tommie REACT: Tommie's face falls as he listens to each potential employer's explanation as to why they can't hire him.
    Tommie leaves the job fair feeling disappointed.
    Tommie REACT: Tommie's face falls as he walks away from the job fair, disappointment evident in his expression.
    Tommie stops by a local diner to grab some lunch.
    Tommie REACT: Tommie smiles as he remembers Bruno as he walks into the diner, feeling both a sense of nostalgia and excitement.
    The service is slow, and Tommie has to wait for 30 minutes to get his food.
    Tommie REACT: Tommie sighs in frustration and taps his fingers on the table, growing increasingly impatient.
    Tommie overhears a conversation at the next table about a job opening.
    Tommie REACT: Tommie leans in closer, eager to hear the conversation.
    Tommie asks the diners about the job opening and gets some information about the company.
    Tommie REACT: Tommie eagerly listens to the diner's description of the company, feeling hopeful about the job opportunity.
    Tommie decides to apply for the job and sends his resume and cover letter.
    Tommie REACT: Tommie confidently sends in his resume and cover letter, determined to get the job.
    Tommie continues his search for job openings and drops off his resume at several local businesses.
    Tommie REACT: Tommie confidently drops his resume off at the various businesses, determined to find a job.
    Tommie takes a break from his job search to go for a walk in a nearby park.
    Tommie REACT: Tommie takes a deep breath of the fresh air and smiles in appreciation as he strolls through the park.
    A dog approaches and licks Tommie's feet, and he pets it for a few minutes.
    Tommie REACT: Tommie smiles in surprise as he pets the dog, feeling a sense of comfort and nostalgia.
    ****************************************
    After 20 observations, Tommie's summary is:
    Name: Tommie (age: 25)
    Innate traits: anxious, likes design, talkative
    Tommie is a determined and resilient individual who remembers his dog from when he was a kid. Despite feeling tired from driving, he has the courage to explore the city, looking for job openings. He persists in updating his resume and cover letter in the pursuit of finding the perfect job opportunity, even attending job fairs when necessary, and is disappointed when he's not offered a job.
    ****************************************
    Tommie sees a group of people playing frisbee and decides to join in.
    Tommie REACT: Tommie smiles and approaches the group, eager to take part in the game.
    Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.
    Tommie REACT: Tommie grimaces in pain and raises his hand to his nose, checking to see if it's bleeding.
    Tommie goes back to his apartment to rest for a bit.
    Tommie REACT: Tommie yawns and trudges back to his apartment, feeling exhausted from his busy day.
    A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.
    Tommie REACT: Tommie shakes his head in annoyance as he surveys the mess.
    Tommie starts to feel frustrated with his job search.
    Tommie REACT: Tommie sighs in frustration and shakes his head, feeling discouraged from his lack of progress.
    Tommie calls his best friend to vent about his struggles.
    Tommie REACT: Tommie runs his hands through his hair and sighs heavily, overwhelmed by his job search.
    Tommie's friend offers some words of encouragement and tells him to keep trying.
    Tommie REACT: Tommie gives his friend a grateful smile, feeling comforted by the words of encouragement.
    Tommie feels slightly better after talking to his friend.
    Tommie REACT: Tommie gives a small smile of appreciation to his friend, feeling grateful for the words of encouragement.
  */

  // Interview after the day
  console.log(
    await interviewAgent(tommie, "Tell me about how your day has been going")
  );
  /*
    Tommie said "My day has been pretty hectic. I've been driving around looking for job openings, attending job fairs, and updating my resume and cover letter. It's been really exhausting, but I'm determined to find the perfect job for me."
  */

  console.log(await interviewAgent(tommie, "How do you feel about coffee?"));
  /*
    Tommie said "I actually love coffee - it's one of my favorite things. I try to drink it every day, especially when I'm stressed from job searching."
  */

  console.log(
    await interviewAgent(tommie, "Tell me about your childhood dog!")
  );
  /*
    Tommie said "My childhood dog was named Bruno. He was an adorable black Labrador Retriever who was always full of energy. Every time I came home he'd be so excited to see me, it was like he never stopped smiling. He was always ready for adventure and he was always my shadow. I miss him every day."
  */

  console.log(
    "Tommie's second summary:\n",
    await tommie.getSummary({ forceRefresh: true })
  );
  /*
    Tommie's second summary:
    Name: Tommie (age: 25)
    Innate traits: anxious, likes design, talkative
    Tommie is a hardworking individual who is looking for new opportunities. Despite feeling tired, he is determined to find the perfect job. He remembers his dog from when he was a kid, is hungry, and is frustrated at times. He shows resilience when searching for his coffee filters, disappointment when checking his email and finding no job offers, and determination when attending the job fair.
  */

  // Let’s add a second character to have a conversation with Tommie. Feel free to configure different traits.
  const evesMemory: GenerativeAgentMemory = new GenerativeAgentMemory(
    llm,
    await createNewMemoryRetriever(),
    {
      verbose: false,
      reflectionThreshold: 5,
    }
  );
  const eve: GenerativeAgent = new GenerativeAgent(llm, evesMemory, {
    name: "Eve",
    age: 34,
    traits: "curious, helpful",
    status:
      "just started her new job as a career counselor last week and received her first assignment, a client named Tommie.",
    // dailySummaries: [
    //   "Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie."
    // ]
  });

  const eveObservations = [
    "Eve overhears her colleague say something about a new client being hard to work with",
    "Eve wakes up and hears the alarm",
    "Eve eats a bowl of porridge",
    "Eve helps a coworker on a task",
    "Eve plays tennis with her friend Xu before going to work",
    "Eve overhears her colleague say something about Tommie being hard to work with",
  ];
  for (const observation of eveObservations) {
    await eve.addMemory(observation, new Date());
  }

  const eveInitialSummary: string = await eve.getSummary({
    forceRefresh: true,
  });
  console.log("Eve's initial summary\n", eveInitialSummary);
  /*
    Eve's initial summary
    Name: Eve (age: 34)
    Innate traits: curious, helpful
    Eve is an attentive listener, helpful colleague, and sociable friend who enjoys playing tennis.
  */

  // Let’s “Interview” Eve before she speaks with Tommie.
  console.log(await interviewAgent(eve, "How are you feeling about today?"));
  /*
    Eve said "I'm feeling a bit anxious about meeting my new client, but I'm sure it will be fine! How about you?".
  */

  console.log(await interviewAgent(eve, "What do you know about Tommie?"));
  /*
    Eve said "I know that Tommie is a recent college graduate who's been struggling to find a job. I'm looking forward to figuring out how I can help him move forward."
  */

  console.log(
    await interviewAgent(
      eve,
      "Tommie is looking to find a job. What are some things you'd like to ask him?"
    )
  );
  /*
    Eve said: "I'd really like to get to know more about Tommie's professional background and experience, and why he is looking for a job. And I'd also like to know more about his strengths and passions and what kind of work he would be best suited for. That way I can help him find the right job to fit his needs."
  */

  // Generative agents are much more complex when they interact with a virtual environment or with each other.
  // Below, we run a simple conversation between Tommie and Eve.
  const runConversation = async (
    agents: GenerativeAgent[],
    initialObservation: string
  ): Promise<void> => {
    // Starts the conversation between two agents
    let [, observation] = await agents[1].generateReaction(initialObservation);
    console.log("Initial reply:", observation);

    // eslint-disable-next-line no-constant-condition
    while (true) {
      let breakDialogue = false;
      for (const agent of agents) {
        const [stayInDialogue, agentObservation] =
          await agent.generateDialogueResponse(observation);
        console.log("Next reply:", agentObservation);
        observation = agentObservation;
        if (!stayInDialogue) {
          breakDialogue = true;
        }
      }
      if (breakDialogue) {
        break;
      }
    }
  };

  const agents: GenerativeAgent[] = [tommie, eve];
  await runConversation(
    agents,
    "Tommie said: Hi, Eve. Thanks for agreeing to meet with me today. I have a bunch of questions and am not sure where to start. Maybe you could first share about your experience?"
  );
  /*
    Initial reply: Eve said "Of course, Tommie. I'd be happy to share about my experience. What specific questions do you have?"
    Next reply: Tommie said "Thank you, Eve. I'm curious about what strategies you used in your own job search. Did you have any specific tactics that helped you stand out to employers?"
    Next reply: Eve said "Sure, Tommie. I found that networking and reaching out to professionals in my field was really helpful. I also made sure to tailor my resume and cover letter to each job I applied to. Do you have any specific questions about those strategies?"
    Next reply: Tommie said "Thank you, Eve. That's really helpful advice. Did you have any specific ways of networking that worked well for you?"
    Next reply: Eve said "Sure, Tommie. I found that attending industry events and connecting with professionals on LinkedIn were both great ways to network. Do you have any specific questions about those tactics?"
    Next reply: Tommie said "That's really helpful, thank you for sharing. Did you find that you were able to make meaningful connections through LinkedIn?"
    Next reply: Eve said "Yes, definitely. I was able to connect with several professionals in my field and even landed a job through a LinkedIn connection. Have you had any luck with networking on LinkedIn?"
    Next reply: Tommie said "That's really impressive! I haven't had much luck yet, but I'll definitely keep trying. Thank you for the advice, Eve."
    Next reply: Eve said "Glad I could help, Tommie. Is there anything else you want to know?"
    Next reply: Tommie said "Thanks again, Eve. I really appreciate your advice and I'll definitely put it into practice. Have a great day!"
    Next reply: Eve said "You're welcome, Tommie! Don't hesitate to reach out if you have any more questions. Have a great day too!"
  */

  // Since the generative agents retain their memories from the day, we can ask them about their plans, conversations, and other memories.
  const tommieSummary: string = await tommie.getSummary({
    forceRefresh: true,
  });
  console.log("Tommie's third and final summary\n", tommieSummary);
  /*
    Tommie's third and final summary
    Name: Tommie (age: 25)
    Innate traits: anxious, likes design, talkative
    Tommie is a determined individual, who demonstrates resilience in the face of disappointment. He is also a nostalgic person, remembering fondly his childhood pet, Bruno. He is resourceful, searching through his moving boxes to find what he needs, and takes initiative to attend job fairs to look for job openings.
  */

  const eveSummary: string = await eve.getSummary({ forceRefresh: true });
  console.log("Eve's final summary\n", eveSummary);
  /*
    Eve's final summary
    Name: Eve (age: 34)
    Innate traits: curious, helpful
    Eve is a helpful and encouraging colleague who actively listens to her colleagues and offers advice on how to move forward. She is willing to take time to understand her clients and their goals, and is committed to helping them succeed.
  */

  const interviewOne: string = await interviewAgent(
    tommie,
    "How was your conversation with Eve?"
  );
  console.log("USER: How was your conversation with Eve?\n");
  console.log(interviewOne);
  /*
    Tommie said "It was great. She was really helpful and knowledgeable. I'm thankful that she took the time to answer all my questions."
  */

  const interviewTwo: string = await interviewAgent(
    eve,
    "How was your conversation with Tommie?"
  );
  console.log("USER: How was your conversation with Tommie?\n");
  console.log(interviewTwo);
  /*
    Eve said "The conversation went very well. We discussed his goals and career aspirations, what kind of job he is looking for, and his experience and qualifications. I'm confident I can help him find the right job."
  */

  const interviewThree: string = await interviewAgent(
    eve,
    "What do you wish you would have said to Tommie?"
  );
  console.log("USER: What do you wish you would have said to Tommie?\n");
  console.log(interviewThree);
  /*
    Eve said "It's ok if you don't have all the answers yet. Let's take some time to learn more about your experience and qualifications, so I can help you find a job that fits your goals."
  */

  return {
    tommieFinalSummary: tommieSummary,
    eveFinalSummary: eveSummary,
    interviewOne,
    interviewTwo,
    interviewThree,
  };
};

const runSimulation = async () => {
  try {
    await Simulation();
  } catch (error) {
    console.log("error running simulation:", error);
    throw error;
  }
};

await runSimulation();
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [TimeWeightedVectorStoreRetriever](https://api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html) from `langchain/retrievers/time_weighted`
* [GenerativeAgentMemory](https://api.js.langchain.com/classes/langchain_experimental_generative_agents.GenerativeAgentMemory.html) from `langchain/experimental/generative_agents`
* [GenerativeAgent](https://api.js.langchain.com/classes/langchain_experimental_generative_agents.GenerativeAgent.html) from `langchain/experimental/generative_agents`
Supabase
========
LangChain supports using a Supabase Postgres database as a vector store, using the `pgvector` Postgres extension. Refer to the [Supabase blog post](https://supabase.com/blog/openai-embeddings-postgres-vector) for more information.
Setup
---------------------------------------

### Install the library with
* npm
* Yarn
* pnpm
npm install -S @supabase/supabase-js
yarn add @supabase/supabase-js
pnpm add @supabase/supabase-js
### Create a table and search function in your database
Run this in your database:
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  embedding jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    (embedding::text)::jsonb as embedding,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
Usage
---------------------------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm
* Yarn
* pnpm
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
### Standard Usage
The example below shows how to perform a basic similarity search with Supabase:
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Bye bye", "What's this?"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const resultOne = await vectorStore.similaritySearch("Hello world", 1);

  console.log(resultOne);
};
#### API Reference:
* [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
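If your `documents` table is already populated, you can also attach to the existing data rather than inserting new texts. A minimal sketch, assuming the same `client` and table/function names as in the example above:

import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";

// Attach to an already-populated table instead of inserting new texts.
const existingStore = await SupabaseVectorStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    client,
    tableName: "documents",
    queryName: "match_documents",
  }
);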
### Metadata Filtering
Given the above `match_documents` Postgres function, you can also pass a filter parameter to only return documents with a specific metadata field value. This filter parameter is a JSON object, and the `match_documents` function will use the Postgres JSONB Containment operator `@>` to filter documents by the metadata field values you specify. See details on the [Postgres JSONB Containment operator](https://www.postgresql.org/docs/current/datatype-json.html#JSON-CONTAINMENT) for more information.
**Note:** If you've previously been using `SupabaseVectorStore`, you may need to drop and recreate the `match_documents` function per the updated SQL above to use this functionality.
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Hello world", "Hello world"],
    [{ user_id: 2 }, { user_id: 1 }, { user_id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const result = await vectorStore.similaritySearch("Hello world", 1, {
    user_id: 3,
  });

  console.log(result);
};
#### API Reference:
* [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Metadata Query Builder Filtering
You can also use query builder-style filtering similar to how [the Supabase JavaScript library works](https://supabase.com/docs/reference/javascript/using-filters) instead of passing an object. Note that since most of the filter properties are in the metadata column, you need to use arrow operators (`->` for integer or `->>` for text) as defined in [Postgrest API documentation](https://postgrest.org/en/stable/references/api/tables_views.html?highlight=operators#json-columns) and specify the data type of the property (e.g. the column should look something like `metadata->some_int_value::int`).
import {
  SupabaseFilterRPCCall,
  SupabaseVectorStore,
} from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to expand upon the notion of quantum fluff, a theoretical concept where subatomic particles coalesce to form transient multidimensional spaces. Yet, this abstraction holds no real-world application or comprehensible meaning, reflecting a cosmic puzzle.",
      metadata: { b: 1, c: 10, stuff: "right" },
    },
    {
      pageContent:
        "This is a long text, but it actually means something because vector database does not understand Lorem Ipsum. So I would need to proceed by discussing the echo of virtual tweets in the binary corridors of the digital universe. Each tweet, like a pixelated canary, hums in an unseen frequency, a fascinatingly perplexing phenomenon that, while conjuring vivid imagery, lacks any concrete implication or real-world relevance, portraying a paradox of multidimensional spaces in the age of cyber folklore.",
      metadata: { b: 2, c: 9, stuff: "right" },
    },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    { pageContent: "hi", metadata: { b: 2, c: 8, stuff: "right" } },
    { pageContent: "bye", metadata: { b: 3, c: 7, stuff: "right" } },
    { pageContent: "what's this", metadata: { b: 4, c: 6, stuff: "right" } },
  ];

  // Also supports an additional {ids: []} parameter for upsertion
  await store.addDocuments(docs);

  const funcFilterA: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .textSearch("content", `'multidimensional' & 'spaces'`, {
        config: "english",
      });

  const resultA = await store.similaritySearch("quantum", 4, funcFilterA);

  const funcFilterB: SupabaseFilterRPCCall = (rpc) =>
    rpc
      .filter("metadata->b::int", "lt", 3)
      .filter("metadata->c::int", "gt", 7)
      .filter("metadata->>stuff", "eq", "right");

  const resultB = await store.similaritySearch("hello", 2, funcFilterB);

  console.log(resultA, resultB);
};
#### API Reference:
* [SupabaseFilterRPCCall](https://api.js.langchain.com/types/langchain_community_vectorstores_supabase.SupabaseFilterRPCCall.html) from `@langchain/community/vectorstores/supabase`
* [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Maximal marginal relevance
You can use maximal marginal relevance search, which optimizes for similarity to the query AND diversity.
**Note:** If you've previously been using `SupabaseVectorStore`, you may need to drop and recreate the `match_documents` function per the updated SQL above to use this functionality.
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const vectorStore = await SupabaseVectorStore.fromTexts(
    ["Hello world", "Bye bye", "What's this?"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    }
  );

  const resultOne = await vectorStore.maxMarginalRelevanceSearch(
    "Hello world",
    { k: 1 }
  );

  console.log(resultOne);
};
#### API Reference:
* [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
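Beyond `k`, the options object for `maxMarginalRelevanceSearch` also accepts `fetchK` (how many candidates to fetch before re-ranking) and `lambda` (the similarity/diversity trade-off between 0 and 1). A quick sketch with illustrative values, reusing the `vectorStore` from the example above:

// Fetch 20 candidates, then re-rank down to 5 results;
// lambda = 0.5 weighs query similarity and diversity equally (illustrative values).
const diverseResults = await vectorStore.maxMarginalRelevanceSearch(
  "Hello world",
  { k: 5, fetchK: 20, lambda: 0.5 }
);
console.log(diverseResults);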
### Document deletion
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from "@supabase/supabase-js";

// First, follow set-up instructions at
// https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
const privateKey = process.env.SUPABASE_PRIVATE_KEY;
if (!privateKey) throw new Error(`Expected env var SUPABASE_PRIVATE_KEY`);

const url = process.env.SUPABASE_URL;
if (!url) throw new Error(`Expected env var SUPABASE_URL`);

export const run = async () => {
  const client = createClient(url, privateKey);

  const embeddings = new OpenAIEmbeddings();

  const store = new SupabaseVectorStore(embeddings, {
    client,
    tableName: "documents",
  });

  const docs = [
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
    { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
  ];

  // Also takes an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);

  const resultA = await store.similaritySearch("hello", 2);
  console.log(resultA);
  /*
    [
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "right" } },
      Document { pageContent: "hello", metadata: { b: 1, c: 9, stuff: "wrong" } },
    ]
  */

  await store.delete({ ids });

  const resultB = await store.similaritySearch("hello", 2);
  console.log(resultB);
  /*
    []
  */
};
#### API Reference:
* [SupabaseVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_supabase.SupabaseVectorStore.html) from `@langchain/community/vectorstores/supabase`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/creating_documents/
Creating documents
==================
A document at its core is fairly simple. It consists of a piece of text and optional metadata. The text is what we pass to the language model, while the metadata is useful for keeping track of information about the document (such as its source).
```typescript
interface Document {
  pageContent: string;
  metadata: Record<string, any>;
}
```
You can create a document object rather easily in LangChain with:
```typescript
import { Document } from "langchain/document";

const doc = new Document({ pageContent: "foo" });
```
You can create one with metadata like this:
```typescript
import { Document } from "langchain/document";

const doc = new Document({ pageContent: "foo", metadata: { source: "1" } });
```
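If you are starting from raw strings, you can map them into documents while tagging each with its origin (a small illustrative sketch; the `source` values here are arbitrary):

```typescript
import { Document } from "langchain/document";

// Illustrative: wrap raw strings in Documents, recording where each came from.
const texts = ["foo", "bar"];
const docs = texts.map(
  (text, i) =>
    new Document({ pageContent: text, metadata: { source: String(i) } })
);
```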
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/custom/
Custom document loaders
=======================
If you want to implement your own Document Loader, you have a few options.
### Subclassing `BaseDocumentLoader`
You can extend the `BaseDocumentLoader` class directly. It provides a few convenience methods for loading documents from a variety of sources.
```typescript
abstract class BaseDocumentLoader implements DocumentLoader {
  abstract load(): Promise<Document[]>;
}
```
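For instance, here is a minimal sketch of a loader extending `BaseDocumentLoader` directly, which only needs to implement `load()`. The `ArrayLoader` class is hypothetical, not an official integration:

```typescript
import { Document } from "langchain/document";
import { BaseDocumentLoader } from "langchain/document_loaders/base";

// Hypothetical loader that wraps an in-memory array of strings.
class ArrayLoader extends BaseDocumentLoader {
  constructor(private texts: string[]) {
    super();
  }

  async load(): Promise<Document[]> {
    return this.texts.map(
      (text, i) => new Document({ pageContent: text, metadata: { index: i } })
    );
  }
}

const docs = await new ArrayLoader(["foo", "bar"]).load();
```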
### Subclassing `TextLoader`
If you want to load documents from a text file, you can extend the `TextLoader` class. The `TextLoader` class takes care of reading the file, so all you have to do is implement a parse method.
```typescript
abstract class TextLoader extends BaseDocumentLoader {
  abstract parse(raw: string): Promise<string[]>;
}
```
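For example, here is a sketch of a `TextLoader` subclass that emits one document per non-empty line. The `LineLoader` name is hypothetical; each string returned from `parse` becomes the `pageContent` of its own document:

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

// Hypothetical: one document per non-empty line of the file.
class LineLoader extends TextLoader {
  async parse(raw: string): Promise<string[]> {
    return raw.split("\n").filter((line) => line.trim().length > 0);
  }
}

const loader = new LineLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();
```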
### Subclassing `BufferLoader`
If you want to load documents from a binary file, you can extend the `BufferLoader` class. The `BufferLoader` class takes care of reading the file, so all you have to do is implement a parse method.
```typescript
abstract class BufferLoader extends BaseDocumentLoader {
  abstract parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]>;
}
```
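As a sketch, here is a `BufferLoader` subclass that simply decodes the raw bytes as UTF-8. The `Utf8Loader` name is hypothetical, and we assume `BufferLoader` is importable from `langchain/document_loaders/fs/buffer`:

```typescript
import { Document } from "langchain/document";
import { BufferLoader } from "langchain/document_loaders/fs/buffer";

// Hypothetical: decode the raw bytes as UTF-8 and return a single document.
class Utf8Loader extends BufferLoader {
  async parse(
    raw: Buffer,
    metadata: Document["metadata"]
  ): Promise<Document[]> {
    return [new Document({ pageContent: raw.toString("utf-8"), metadata })];
  }
}

const loader = new Utf8Loader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();
```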
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/json/
JSON
====
> [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
> [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value.
The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to select the keys in your JSON files that you want to extract.
### No JSON pointer example
The simplest way to use it is to specify no JSON pointer. The loader will then load all strings it finds in the JSON object.
Example JSON file:
```json
{
  "texts": ["This is a sentence.", "This is another sentence."]
}
```
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
### Using JSON pointer example
You can handle more advanced scenarios by choosing which keys in your JSON object to extract strings from.
In this example, we want to only extract information from "from" and "surname" entries.
```json
{
  "1": {
    "body": "BD 2023 SUMMER",
    "from": "LinkedIn Job",
    "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"]
  },
  "2": {
    "body": "Intern, Treasury and other roles are available",
    "from": "LinkedIn Job2",
    "labels": ["IMPORTANT"],
    "other": {
      "name": "plop",
      "surname": "bob"
    }
  }
}
```
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    pageContent: 'LinkedIn Job',
    metadata: { source: './src/json/example.json', line: 1 }
  },
  Document {
    pageContent: 'LinkedIn Job2',
    metadata: { source: './src/json/example.json', line: 2 }
  },
  Document {
    pageContent: 'bob',
    metadata: { source: './src/json/example.json', line: 3 }
  }
]
*/
```
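The same module also exports a `JSONLinesLoader` for JSON Lines files, which takes a JSON pointer into each line's value (the same loader appears in the File Directory example elsewhere in these docs):

```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

// Extract the value at the "/html" pointer from each line of the .jsonl file.
const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);
const docs = await loader.load();
```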
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/pdf/
PDF
===
> [Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This covers how to load `PDF` documents into the Document format that we use downstream.
By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`.
Setup
---------------------------------------
* npm: `npm install pdf-parse`
* Yarn: `yarn add pdf-parse`
* pnpm: `pnpm add pdf-parse`
Usage, one document per page
-----------------------------------------------------------------------------------------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

// Or, in web environments:
// import { WebPDFLoader } from "langchain/document_loaders/web/pdf";
// const blob = new Blob(); // e.g. from a file input
// const loader = new WebPDFLoader(blob);

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");

const docs = await loader.load();
```
Usage, one document per file
-----------------------------------------------------------------------------------------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});

const docs = await loader.load();
```
Usage, custom `pdfjs` build
---------------------------------------------------------------------------------------------------
By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object.
In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build.
* npm: `npm install pdfjs-dist`
* Yarn: `yarn add pdfjs-dist`
* pnpm: `pnpm add pdfjs-dist`
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
```
Eliminating extra spaces
------------------------------------------------------------------------------------------------
PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this:
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  parsedItemSeparator: "",
});

const docs = await loader.load();
```
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/csv/
CSV
===
> A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
Load CSV data with a single row per document.
Setup
---------------------------------------
* npm: `npm install d3-dsv@2`
* Yarn: `yarn add d3-dsv@2`
* pnpm: `pnpm add d3-dsv@2`
Usage, extracting all columns
--------------------------------------------------------------------------------------------------------------
Example CSV file:
```csv
id,text
1,This is a sentence.
2,This is another sentence.
```
Example code:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 1\ntext: This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 2\ntext: This is another sentence.",
  },
]
*/
```
Usage, extracting a single column
--------------------------------------------------------------------------------------------------------------------------
Example CSV file:
```csv
id,text
1,This is a sentence.
2,This is another sentence.
```
Example code:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
https://js.langchain.com/v0.1/docs/modules/data_connection/document_loaders/file_directory/
File Directory
==============
This covers how to load all documents in a directory.
The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example folder:
```
src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv
```
Example code:
```typescript
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);

const docs = await loader.load();
console.log({ docs });
```
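A sketch of the optional arguments, assuming `DirectoryLoader` also accepts a `recursive` flag and an `UnknownHandling` option for files with no matching loader (both exported from the same module):

```typescript
import {
  DirectoryLoader,
  UnknownHandling,
} from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Assumption: the third argument toggles recursion into subdirectories, and
// the fourth controls what happens to files with no registered loader.
const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  { ".txt": (path) => new TextLoader(path) },
  true, // recurse into subdirectories
  UnknownHandling.Ignore // silently skip unmatched files
);
const docs = await loader.load();
```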
https://js.langchain.com/v0.1/docs/modules/data_connection/vectorstores/custom/
Custom vectorstores
===================
If you want to interact with a vectorstore that is not already present as an [integration](/v0.1/docs/integrations/vectorstores/), you can extend the [`VectorStore` class](https://api.js.langchain.com/classes/langchain_core_vectorstores.VectorStore.html).
This involves overriding a few methods:
* `FilterType`, which declares the type of metadata filter your vectorstore accepts, if it supports filtering.
* `addDocuments`, which embeds and adds LangChain documents to storage. This is a convenience method that should generally use the `embeddings` passed into the constructor to embed the document content, then call `addVectors`.
* `addVectors`, which is responsible for saving embedded vectors, document content, and metadata to the backing store.
* `similaritySearchVectorWithScore`, which searches for vectors within the store by similarity to an input vector, and returns a tuple of the most relevant documents and a score.
* `_vectorstoreType`, which returns an identifying string for the class. Used for tracing and type-checking.
* `fromTexts` and `fromDocuments`, which are convenience static methods for initializing a vectorstore from data.
There are a few optional methods too:
* `delete`, which deletes vectors and their associated metadata from the backing store based on arbitrary parameters.
* `maxMarginalRelevanceSearch`, an alternative search mode that increases the number of retrieved vectors, reranks them to optimize for diversity, then returns top results. This can help reduce the amount of redundancy in returned results.
A few notes:
* Different databases provide varying levels of support for storing raw content/extra metadata fields. Some higher level retrieval abstractions like [multi-vector retrieval](/v0.1/docs/modules/data_connection/retrievers/multi-vector-retriever/) in LangChain rely on the ability to set arbitrary metadata on stored vectors.
* Generally, search type arguments that are not used directly to filter returned vectors by associated metadata should be passed into the constructor.
Here is an example of an in-memory vectorstore with no persistence that uses cosine distance:
```typescript
import { VectorStore } from "@langchain/core/vectorstores";
import type { EmbeddingsInterface } from "@langchain/core/embeddings";
import { Document } from "@langchain/core/documents";
import { similarity as ml_distance_similarity } from "ml-distance";

interface InMemoryVector {
  content: string;
  embedding: number[];
  metadata: Record<string, any>;
}

export interface CustomVectorStoreArgs {}

export class CustomVectorStore extends VectorStore {
  declare FilterType: (doc: Document) => boolean;

  memoryVectors: InMemoryVector[] = [];

  _vectorstoreType(): string {
    return "custom";
  }

  constructor(
    embeddings: EmbeddingsInterface,
    fields: CustomVectorStoreArgs = {}
  ) {
    super(embeddings, fields);
  }

  async addDocuments(documents: Document[]): Promise<void> {
    const texts = documents.map(({ pageContent }) => pageContent);
    return this.addVectors(
      await this.embeddings.embedDocuments(texts),
      documents
    );
  }

  async addVectors(vectors: number[][], documents: Document[]): Promise<void> {
    const memoryVectors = vectors.map((embedding, idx) => ({
      content: documents[idx].pageContent,
      embedding,
      metadata: documents[idx].metadata,
    }));
    this.memoryVectors = this.memoryVectors.concat(memoryVectors);
  }

  async similaritySearchVectorWithScore(
    query: number[],
    k: number,
    filter?: this["FilterType"]
  ): Promise<[Document, number][]> {
    const filterFunction = (memoryVector: InMemoryVector) => {
      if (!filter) {
        return true;
      }
      const doc = new Document({
        metadata: memoryVector.metadata,
        pageContent: memoryVector.content,
      });
      return filter(doc);
    };
    const filteredMemoryVectors = this.memoryVectors.filter(filterFunction);
    const searches = filteredMemoryVectors
      .map((vector, index) => ({
        similarity: ml_distance_similarity.cosine(query, vector.embedding),
        index,
      }))
      .sort((a, b) => (a.similarity > b.similarity ? -1 : 0))
      .slice(0, k);
    const result: [Document, number][] = searches.map((search) => [
      new Document({
        metadata: filteredMemoryVectors[search.index].metadata,
        pageContent: filteredMemoryVectors[search.index].content,
      }),
      search.similarity,
    ]);
    return result;
  }

  static async fromTexts(
    texts: string[],
    metadatas: object[] | object,
    embeddings: EmbeddingsInterface,
    dbConfig?: CustomVectorStoreArgs
  ): Promise<CustomVectorStore> {
    const docs: Document[] = [];
    for (let i = 0; i < texts.length; i += 1) {
      const metadata = Array.isArray(metadatas) ? metadatas[i] : metadatas;
      const newDoc = new Document({
        pageContent: texts[i],
        metadata,
      });
      docs.push(newDoc);
    }
    return this.fromDocuments(docs, embeddings, dbConfig);
  }

  static async fromDocuments(
    docs: Document[],
    embeddings: EmbeddingsInterface,
    dbConfig?: CustomVectorStoreArgs
  ): Promise<CustomVectorStore> {
    const instance = new this(embeddings, dbConfig);
    await instance.addDocuments(docs);
    return instance;
  }
}
```
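The example above omits the optional `delete` method. For this in-memory store, it could look something like the sketch below, added inside `CustomVectorStore`; the predicate-style `params` shape is an assumption on our part, not a base-class contract (real stores usually delete by id):

```typescript
// Sketch of an optional delete method for CustomVectorStore.
// Assumption: callers pass a document predicate identifying what to remove.
async delete(params: { filter: (doc: Document) => boolean }): Promise<void> {
  this.memoryVectors = this.memoryVectors.filter(
    (vector) =>
      !params.filter(
        new Document({ pageContent: vector.content, metadata: vector.metadata })
      )
  );
}
```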
Then, we can call this vectorstore directly:
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const vectorstore = new CustomVectorStore(new OpenAIEmbeddings());

await vectorstore.addDocuments([
  new Document({
    pageContent: "Mitochondria is the powerhouse of the cell",
    metadata: { id: 1 },
  }),
  new Document({
    pageContent: "Buildings are made of brick",
    metadata: { id: 2 },
  }),
]);

await vectorstore.similaritySearch("What is the powerhouse of the cell?");
```
```
[
  Document {
    pageContent: "Mitochondria is the powerhouse of the cell",
    metadata: { id: 1 }
  },
  Document {
    pageContent: "Buildings are made of brick",
    metadata: { id: 2 }
  }
]
```
Or, we can interact with the vectorstore as a retriever:
```typescript
const retriever = vectorstore.asRetriever();

await retriever.invoke("What is the powerhouse of the cell?");
```
```
[
  Document {
    pageContent: "Mitochondria is the powerhouse of the cell",
    metadata: { id: 1 }
  },
  Document {
    pageContent: "Buildings are made of brick",
    metadata: { id: 2 }
  }
]
```
https://js.langchain.com/v0.1/docs/integrations/vectorstores/faiss/
Faiss
=====
**Compatibility:** Only available on Node.js.
[Faiss](https://github.com/facebookresearch/faiss) is a library for efficient similarity search and clustering of dense vectors.
LangChain.js supports using Faiss as a vectorstore that can be saved to a file. It can also read index files saved by [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading).
Setup
---------------------------------------
Install [faiss-node](https://github.com/ewfian/faiss-node), which provides Node.js bindings for [Faiss](https://github.com/facebookresearch/faiss).
* npm
* Yarn
* pnpm
npm install -S faiss-node
yarn add faiss-node
pnpm add faiss-node
To enable the ability to read the saved file from [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading), the [pickleparser](https://github.com/ewfian/pickleparser) also needs to install.
* npm
* Yarn
* pnpm
npm install -S pickleparser
yarn add pickleparser
pnpm add pickleparser
Usage
-----

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
### Create a new index from texts

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await FaissStore.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Deleting vectors

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const vectorStore = new FaissStore(new OpenAIEmbeddings(), {});
const ids = ["2", "1", "4"];
const idsReturned = await vectorStore.addDocuments(
  [
    new Document({
      pageContent: "my world",
      metadata: { tag: 2 },
    }),
    new Document({
      pageContent: "our world",
      metadata: { tag: 1 },
    }),
    new Document({
      pageContent: "your world",
      metadata: { tag: 4 },
    }),
  ],
  { ids }
);

console.log(idsReturned);
/*
  [ '2', '1', '4' ]
*/

const docs = await vectorStore.similaritySearch("my world", 3);
console.log(docs);
/*
[
  Document { pageContent: 'my world', metadata: { tag: 2 } },
  Document { pageContent: 'your world', metadata: { tag: 4 } },
  Document { pageContent: 'our world', metadata: { tag: 1 } }
]
*/

await vectorStore.delete({ ids: [ids[0], ids[1]] });

const docs2 = await vectorStore.similaritySearch("my world", 3);
console.log(docs2);
/*
[ Document { pageContent: 'your world', metadata: { tag: 4 } } ]
*/
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Merging indexes and creating new index from another instance

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  // Create an initial vector store
  const vectorStore = await FaissStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  // Create another vector store from texts
  const vectorStore2 = await FaissStore.fromTexts(
    ["Some text"],
    [{ id: 1 }],
    new OpenAIEmbeddings()
  );

  // Merge the first vector store into vectorStore2
  await vectorStore2.mergeFrom(vectorStore);

  const resultOne = await vectorStore2.similaritySearch("hello world", 1);
  console.log(resultOne);

  // You can also create a new vector store from another FaissStore index
  const vectorStore3 = await FaissStore.fromIndex(
    vectorStore2,
    new OpenAIEmbeddings()
  );
  const resultTwo = await vectorStore3.similaritySearch("Bye bye", 1);
  console.log(resultTwo);
};
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Save an index to file and load it again

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await FaissStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await FaissStore.load(
  directory,
  new OpenAIEmbeddings()
);

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
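A loaded (or freshly created) `FaissStore` behaves like any other LangChain vector store, so it can also be wrapped as a retriever for use in chains. A minimal sketch, reusing the placeholder directory from the example above:

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

// Placeholder path, as in the save/load example above
const directory = "your/directory/here";
const vectorStore = await FaissStore.load(directory, new OpenAIEmbeddings());

// Expose the store as a retriever that returns the top 2 matches
const retriever = vectorStore.asRetriever(2);
const relevantDocs = await retriever.getRelevantDocuments("hello world");
console.log(relevantDocs);
```

`asRetriever` is inherited from the base vector store class, so the same pattern should apply to the other stores on this page as well.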
### Load the saved file from [Python's implementation](https://python.langchain.com/docs/integrations/vectorstores/faiss#saving-and-loading)

```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
import { OpenAIEmbeddings } from "@langchain/openai";

// The directory of data saved from Python
const directory = "your/directory/here";

// Load the vector store from the directory
const loadedVectorStore = await FaissStore.loadFromPython(
  directory,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const result = await loadedVectorStore.similaritySearch("test", 2);
console.log("result", result);
```
#### API Reference:
* [FaissStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_faiss.FaissStore.html) from `@langchain/community/vectorstores/faiss`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *

https://js.langchain.com/v0.1/docs/integrations/vectorstores/hnswlib/
HNSWLib
=======
Compatibility
Only available on Node.js.

HNSWLib is an in-memory vector store that can be saved to a file. It uses [HNSWLib](https://github.com/nmslib/hnswlib) under the hood.
Setup
-----

caution

**On Windows**, you might need to install [Visual Studio](https://visualstudio.microsoft.com/downloads/) first in order to properly build the `hnswlib-node` package.

You can install it with:

```bash
npm install hnswlib-node
# or
yarn add hnswlib-node
# or
pnpm add hnswlib-node
```

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----

### Create a new index from texts

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Search for the most similar document
const result = await vectorStore.similaritySearch("hello world", 1);
console.log(result);
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Save an index to a file and load it again

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Filter documents

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await HNSWLib.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// The third argument is a filter function over each candidate document
const result = await vectorStore.similaritySearch(
  "hello world",
  10,
  (document) => document.metadata.id === 3
);

// Only "hello nice world" will be returned
console.log(result);
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Delete index

```typescript
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "@langchain/openai";

// The directory of a previously saved vector store
const directory = "your/directory/here";

// Load the vector store from that directory
const loadedVectorStore = await HNSWLib.load(directory, new OpenAIEmbeddings());

// Delete the index
await loadedVectorStore.delete({ directory });
```
#### API Reference:
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *

https://js.langchain.com/v0.1/docs/integrations/vectorstores/lancedb/
LanceDB
=======
LanceDB is an embedded vector database for AI applications. It is open source and distributed under the Apache-2.0 license.

LanceDB datasets are persisted to disk and can be shared between Node.js and Python.
Setup
-----

Install the [LanceDB](https://github.com/lancedb/lancedb) [Node.js bindings](https://www.npmjs.com/package/vectordb):

```bash
npm install -S vectordb
# or
yarn add vectordb
# or
pnpm add vectordb
```

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----

### Create a new index from texts

```typescript
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", id: 1 },
  ]);

  const vectorStore = await LanceDB.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    { table }
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'hello nice world', metadata: { id: 3 } } ]
};
```
#### API Reference:
* [LanceDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader

```typescript
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";
import { connect } from "vectordb";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  const table = await db.createTable("vectors", [
    { vector: Array(1536), text: "sample", source: "a" },
  ]);

  const vectorStore = await LanceDB.fromDocuments(docs, new OpenAIEmbeddings(), {
    table,
  });

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [
  //   Document {
  //     pageContent: 'Foo\nBar\nBaz\n\n',
  //     metadata: { source: 'src/document_loaders/example_data/example.txt' }
  //   }
  // ]
};
```
#### API Reference:
* [LanceDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Open an existing dataset

```typescript
import { LanceDB } from "@langchain/community/vectorstores/lancedb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { connect } from "vectordb";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import os from "node:os";

//
// You can open a LanceDB dataset created elsewhere, such as LangChain Python,
// by opening an existing table
//
export const run = async () => {
  const uri = await createdTestDb();
  const db = await connect(uri);
  const table = await db.openTable("vectors");

  const vectorStore = new LanceDB(new OpenAIEmbeddings(), { table });

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  // [ Document { pageContent: 'Hello world', metadata: { id: 1 } } ]
};

async function createdTestDb(): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "lancedb-"));
  const db = await connect(dir);
  await db.createTable("vectors", [
    { vector: Array(1536), text: "Hello world", id: 1 },
    { vector: Array(1536), text: "Bye bye", id: 2 },
    { vector: Array(1536), text: "hello nice world", id: 3 },
  ]);
  return dir;
}
```
#### API Reference:
* [LanceDB](https://api.js.langchain.com/classes/langchain_community_vectorstores_lancedb.LanceDB.html) from `@langchain/community/vectorstores/lancedb`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *

https://js.langchain.com/v0.1/docs/integrations/vectorstores/closevector/
CloseVector
===========
Compatibility
Available in both the browser and Node.js.

[CloseVector](https://closevector.getmegaportal.com/) is a cross-platform vector database that can run in both the browser and Node.js. For example, you can create your index on Node.js and then load/query it in the browser. For more information, please visit the [CloseVector Docs](https://closevector-docs.getmegaportal.com/).
Setup
-----

### CloseVector Web

```bash
npm install -S closevector-web
# or
yarn add closevector-web
# or
pnpm add closevector-web
```

### CloseVector Node

```bash
npm install -S closevector-node
# or
yarn add closevector-node
# or
pnpm add closevector-node
```

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----

### Create a new index from texts

```typescript
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  // If you want to import the browser version, use the following line instead:
  // const vectorStore = await CloseVectorWeb.fromTexts(
  const vectorStore = await CloseVectorNode.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings()
  );

  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
};
```
#### API Reference:
* [CloseVectorNode](https://api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Create a new index from a loader

```typescript
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromDocuments(
const vectorStore = await CloseVectorNode.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
```
#### API Reference:
* [CloseVectorNode](https://api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Save an index to CloseVector CDN and load it again

CloseVector supports saving/loading indexes to/from the cloud. To use this feature, you need to create an account on [CloseVector](https://closevector.getmegaportal.com/). Please read the [CloseVector Docs](https://closevector-docs.getmegaportal.com/) and generate your API key first by [logging in](https://closevector.getmegaportal.com/).

```typescript
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromTexts(
const vectorStore = await CloseVectorNode.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  undefined,
  {
    key: "your access key",
    secret: "your secret",
  }
);

// Save the vector store to cloud
await vectorStore.saveToCloud({
  description: "example",
  public: true,
});

const { uuid } = vectorStore.instance;

// Load the vector store from cloud
// If you want to import the browser version, use the following line instead:
// const loadedVectorStore = await CloseVectorWeb.loadFromCloud(
const loadedVectorStore = await CloseVectorNode.loadFromCloud({
  uuid,
  embeddings: new OpenAIEmbeddings(),
  credentials: {
    key: "your access key",
    secret: "your secret",
  },
});

// If you want to import the node version, use the following lines instead:
// const loadedVectorStoreOnNode = await CloseVectorNode.loadFromCloud({
//   uuid,
//   embeddings: new OpenAIEmbeddings(),
//   credentials: {
//     key: "your access key",
//     secret: "your secret"
//   }
// });
const loadedVectorStoreOnBrowser = await CloseVectorWeb.loadFromCloud({
  uuid,
  embeddings: new OpenAIEmbeddings(),
  credentials: {
    key: "your access key",
    secret: "your secret",
  },
});

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);

// or
const resultOnBrowser = await loadedVectorStoreOnBrowser.similaritySearch(
  "hello world",
  1
);
console.log(resultOnBrowser);
```
#### API Reference:
* [CloseVectorNode](https://api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [CloseVectorWeb](https://api.js.langchain.com/classes/langchain_community_vectorstores_closevector_web.CloseVectorWeb.html) from `@langchain/community/vectorstores/closevector/web`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Save an index to file and load it again

```typescript
// If you want to import the browser version, use the following line instead:
// import { CloseVectorWeb } from "@langchain/community/vectorstores/closevector/web";
import { CloseVectorNode } from "@langchain/community/vectorstores/closevector/node";
import { OpenAIEmbeddings } from "@langchain/openai";

// Create a vector store through any method, here from texts as an example
// If you want to import the browser version, use the following line instead:
// const vectorStore = await CloseVectorWeb.fromTexts(
const vectorStore = await CloseVectorNode.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

// Save the vector store to a directory
const directory = "your/directory/here";
await vectorStore.save(directory);

// Load the vector store from the same directory
// If you want to import the browser version, use the following line instead:
// const loadedVectorStore = await CloseVectorWeb.load(
const loadedVectorStore = await CloseVectorNode.load(
  directory,
  new OpenAIEmbeddings()
);

// vectorStore and loadedVectorStore are identical
const result = await loadedVectorStore.similaritySearch("hello world", 1);
console.log(result);
```
#### API Reference:
* [CloseVectorNode](https://api.js.langchain.com/classes/langchain_community_vectorstores_closevector_node.CloseVectorNode.html) from `@langchain/community/vectorstores/closevector/node`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* * *

https://js.langchain.com/v0.1/docs/integrations/vectorstores/chroma/
Chroma
======
> [Chroma](https://docs.trychroma.com/getting-started) is an AI-native open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0.
[![Discord](https://img.shields.io/discord/1073293645303795742)](https://discord.gg/MMeYNTmh3x) [![License](https://img.shields.io/static/v1?label=license&message=Apache 2.0&color=white)](https://github.com/chroma-core/chroma/blob/master/LICENSE) ![Integration Tests](https://github.com/chroma-core/chroma/actions/workflows/chroma-integration-test.yml/badge.svg?branch=main)
* [Website](https://www.trychroma.com/)
* [Documentation](https://docs.trychroma.com/)
* [Twitter](https://twitter.com/trychroma)
* [Discord](https://discord.gg/MMeYNTmh3x)
Setup
-----

1. Run Chroma with Docker on your computer:

```bash
git clone git@github.com:chroma-core/chroma.git
cd chroma
docker-compose up -d --build
```

2. Install the Chroma JS SDK:

```bash
npm install -S chromadb
# or
yarn add chromadb
# or
pnpm add chromadb
```
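As a possible shortcut to step 1, Chroma also publishes a prebuilt Docker image, so you may not need to build from source. A minimal sketch, assuming the official image name and default port (check the Chroma docs if this does not match your version):

```bash
# Assumes the prebuilt image is published as chromadb/chroma
# and that the server listens on port 8000 (the default).
docker run -p 8000:8000 chromadb/chroma
```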
Chroma is fully-typed, fully-tested, and fully-documented.

Like any other database, you can:

* `.add`
* `.get`
* `.update`
* `.upsert`
* `.delete`
* `.peek`
* and `.query`, which runs the similarity search.

View full docs at [docs](https://docs.trychroma.com/js_reference/Collection). For a quick feel for these methods, see the sketch below.
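As a loose illustration of that client API, here is a minimal sketch using the raw `chromadb` SDK directly rather than LangChain. It assumes a Chroma server is running on `http://localhost:8000`; the collection name and the toy 3-dimensional embeddings are invented for the example:

```typescript
import { ChromaClient } from "chromadb";

const client = new ChromaClient({ path: "http://localhost:8000" });

// getOrCreateCollection returns the collection, creating it if needed
const collection = await client.getOrCreateCollection({ name: "demo" });

// .add takes parallel arrays of ids, embeddings, documents, and metadata
await collection.add({
  ids: ["1", "2"],
  embeddings: [
    [1, 0, 0],
    [0, 1, 0],
  ],
  documents: ["Hello world", "Bye bye"],
  metadatas: [{ id: 2 }, { id: 1 }],
});

// .query runs the similarity search against the stored embeddings
const results = await collection.query({
  queryEmbeddings: [[1, 0, 0]],
  nResults: 1,
});
console.log(results);
```

The LangChain `Chroma` vector store used below wraps this client and handles embedding for you, so you normally won't call it directly.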
Usage, Index and query Documents
--------------------------------

tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```

```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Create vector store and index the docs
const vectorStore = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), {
  collectionName: "a-test-collection",
  url: "http://localhost:8000", // Optional, will default to this value
  collectionMetadata: {
    "hnsw:space": "cosine",
  }, // Optional, can be used to specify the distance method of the embedding space https://docs.trychroma.com/usage-guide#changing-the-distance-function
});

// Search for the most similar document
const response = await vectorStore.similaritySearch("hello", 1);
console.log(response);
/*
[
  Document {
    pageContent: 'Foo\nBar\nBaz\n\n',
    metadata: { source: 'src/document_loaders/example_data/example.txt' }
  }
]
*/
```
#### API Reference:
* [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Usage, Index and query texts
-----------------------------------------------------------------------------------------------------------
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

// text sample from Godel, Escher, Bach
const vectorStore = await Chroma.fromTexts(
  [
    `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
Harmonic Labyrinth of the dreaded Majotaur?`,
    "Achilles: Yiikes! What is that?",
    `Tortoise: They say-although I person never believed it myself-that an I
Majotaur has created a tiny labyrinth sits in a pit in the middle of it,
waiting innocent victims to get lost in its fears complexity. Then, when
they wander and dazed into the center, he laughs and laughs at them-so hard,
that he laughs them to death!`,
    "Achilles: Oh, no!",
    "Tortoise: But it's only a myth. Courage, Achilles.",
  ],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings(),
  {
    collectionName: "godel-escher-bach",
  }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/

// You can also filter by metadata
const filteredResponse = await vectorStore.similaritySearch("scared", 2, {
  id: 1,
});
console.log(filteredResponse);
/*
[
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/
```
#### API Reference:
* [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, Query docs from existing collection
-----------------------------------------------------------------------------------------------------------------------------------------------------
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await Chroma.fromExistingCollection(
  new OpenAIEmbeddings(),
  { collectionName: "godel-escher-bach" }
);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document { pageContent: 'Achilles: Oh, no!', metadata: {} },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { id: 1 }
  }
]
*/
```
#### API Reference:
* [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, delete docs
-----------------------------------------------------------------------------
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();
const vectorStore = new Chroma(embeddings, {
  collectionName: "test-deletion",
});

const documents = [
  {
    pageContent: `Tortoise: Labyrinth? Labyrinth? Could it Are we in the notorious Little
Harmonic Labyrinth of the dreaded Majotaur?`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Yiikes! What is that?",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: `Tortoise: They say-although I person never believed it myself-that an I
Majotaur has created a tiny labyrinth sits in a pit in the middle of it,
waiting innocent victims to get lost in its fears complexity. Then, when
they wander and dazed into the center, he laughs and laughs at them-so hard,
that he laughs them to death!`,
    metadata: {
      speaker: "Tortoise",
    },
  },
  {
    pageContent: "Achilles: Oh, no!",
    metadata: {
      speaker: "Achilles",
    },
  },
  {
    pageContent: "Tortoise: But it's only a myth. Courage, Achilles.",
    metadata: {
      speaker: "Tortoise",
    },
  },
];

// Also supports an additional {ids: []} parameter for upsertion
const ids = await vectorStore.addDocuments(documents);

const response = await vectorStore.similaritySearch("scared", 2);
console.log(response);
/*
[
  Document {
    pageContent: 'Achilles: Oh, no!',
    metadata: { speaker: 'Achilles' }
  },
  Document {
    pageContent: 'Achilles: Yiikes! What is that?',
    metadata: { speaker: 'Achilles' }
  }
]
*/

// You can also pass a "filter" parameter instead
await vectorStore.delete({ ids });

const response2 = await vectorStore.similaritySearch("scared", 2);
console.log(response2);
/*
  []
*/
```
#### API Reference:
* [Chroma](https://api.js.langchain.com/classes/langchain_community_vectorstores_chroma.Chroma.html) from `@langchain/community/vectorstores/chroma`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Zep
===
> [Zep](https://www.getzep.com) is a long-term memory service for AI Assistant apps. With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, while also reducing hallucinations, latency, and cost.
> Interested in Zep Cloud? See the [Zep Cloud Installation Guide](https://help.getzep.com/sdks) and the [Zep Cloud Vector Store Example](https://help.getzep.com/langchain/examples/vectorstore-example).
**Note:** The `ZepVectorStore` works with `Documents` and is intended to be used as a `Retriever`. Its functionality is separate from that of Zep's `ZepMemory` class, which is designed for persisting, enriching, and searching your users' chat history.
Why Zep's VectorStore?
---------------------------------------------------------------------------------------------------
Zep automatically embeds documents added to the Zep Vector Store using low-latency models local to the Zep server. The Zep TS/JS client can also be used in non-Node edge environments. These features, together with Zep's chat memory functionality, make Zep ideal for building conversational LLM apps where latency and performance are important.
### Supported Search Types
Zep supports both similarity search and Maximal Marginal Relevance (MMR) search. MMR search is particularly useful for Retrieval Augmented Generation applications as it re-ranks results to ensure diversity in the returned documents.
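For a feel of the difference, here is a rough sketch comparing the two search types against an already-populated, auto-embedded collection (the URL and collection name are placeholders):

```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { FakeEmbeddings } from "@langchain/core/utils/testing";

// Hypothetical existing collection on a local Zep server
const store = new ZepVectorStore(new FakeEmbeddings(), {
  apiUrl: "http://localhost:8000",
  collectionName: "my-collection",
});

// Plain similarity search returns the k nearest documents...
const similar = await store.similaritySearch("sad music", 3);

// ...while MMR re-ranks a wider candidate pool so the final k results
// trade off relevance against diversity
const diverse = await store.maxMarginalRelevanceSearch("sad music", { k: 3 });

console.log(similar, diverse);
```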
Installation
------------------------------------------------------------
Follow the [Zep Quickstart Guide](https://docs.getzep.com/deployment/quickstart/) to install and get started with Zep.
Usage
---------------------------------------
You'll need your Zep API URL and optionally an API key to use the Zep VectorStore. See the [Zep docs](https://docs.getzep.com) for more information.
In the examples below, we're using Zep's auto-embedding feature, which automatically embeds documents on the Zep server using low-latency embedding models. Since LangChain requires passing in an `Embeddings` instance, we pass in `FakeEmbeddings`.

**Note:** If you pass in an `Embeddings` instance other than `FakeEmbeddings`, that class will be used to embed documents. You must also set your document collection to `isAutoEmbedded === false`. See the `OpenAIEmbeddings` example below.
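Put differently, the two modes pair an embeddings instance with a matching `isAutoEmbedded` flag. A minimal sketch of the two configurations (URL and collection name are placeholders):

```typescript
// Zep server does the embedding: pair this config with FakeEmbeddings
const autoEmbeddedConfig = {
  apiUrl: "http://localhost:8000", // placeholder URL
  collectionName: "my-collection", // placeholder name
  embeddingDimensions: 1536, // must match the width of your embedding model
  isAutoEmbedded: true,
};

// Client does the embedding: pair with a real Embeddings instance
// (e.g. OpenAIEmbeddings) and disable auto-embedding
const clientEmbeddedConfig = {
  ...autoEmbeddedConfig,
  isAutoEmbedded: false,
};
```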
### Example: Creating a ZepVectorStore from Documents & Querying
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { FakeEmbeddings } from "@langchain/core/utils/testing";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: true, // If true, the vector store will automatically embed documents when they are added
  };

  const embeddings = new FakeEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  // Wait for the documents to be embedded
  // eslint-disable-next-line no-constant-condition
  while (true) {
    const c = await vectorStore.client.document.getCollection(collectionName);
    console.log(
      `Embedding status: ${c.document_embedded_count}/${c.document_count} documents embedded`
    );
    // eslint-disable-next-line no-promise-executor-return
    await new Promise((resolve) => setTimeout(resolve, 1000));
    if (c.status === "ready") {
      break;
    }
  }

  const results = await vectorStore.similaritySearchWithScore("bar", 3);

  console.log("Similarity Results:");
  console.log(JSON.stringify(results));

  const results2 = await vectorStore.maxMarginalRelevanceSearch("bar", {
    k: 3,
  });

  console.log("MMR Results:");
  console.log(JSON.stringify(results2));
};
```
#### API Reference:
* [ZepVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [FakeEmbeddings](https://api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
### Example: Querying a ZepVectorStore using a metadata filter
```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { FakeEmbeddings } from "@langchain/core/utils/testing";
import { randomUUID } from "crypto";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({
    metadata: { album: "Led Zeppelin IV", year: 1971 },
    pageContent:
      "Stairway to Heaven is one of the most iconic songs by Led Zeppelin.",
  }),
  new Document({
    metadata: { album: "Led Zeppelin I", year: 1969 },
    pageContent:
      "Dazed and Confused was a standout track on Led Zeppelin's debut album.",
  }),
  new Document({
    metadata: { album: "Physical Graffiti", year: 1975 },
    pageContent:
      "Kashmir, from Physical Graffiti, showcases Led Zeppelin's unique blend of rock and world music.",
  }),
  new Document({
    metadata: { album: "Houses of the Holy", year: 1973 },
    pageContent:
      "The Rain Song is a beautiful, melancholic piece from Houses of the Holy.",
  }),
  new Document({
    metadata: { band: "Black Sabbath", album: "Paranoid", year: 1970 },
    pageContent:
      "Paranoid is Black Sabbath's second studio album and includes some of their most notable songs.",
  }),
  new Document({
    metadata: {
      band: "Iron Maiden",
      album: "The Number of the Beast",
      year: 1982,
    },
    pageContent:
      "The Number of the Beast is often considered Iron Maiden's best album.",
  }),
  new Document({
    metadata: { band: "Metallica", album: "Master of Puppets", year: 1986 },
    pageContent:
      "Master of Puppets is widely regarded as Metallica's finest work.",
  }),
  new Document({
    metadata: { band: "Megadeth", album: "Rust in Peace", year: 1990 },
    pageContent:
      "Rust in Peace is Megadeth's fourth studio album and features intricate guitar work.",
  }),
];

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: true, // If true, the vector store will automatically embed documents when they are added
  };

  const embeddings = new FakeEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  // Wait for the documents to be embedded
  // eslint-disable-next-line no-constant-condition
  while (true) {
    const c = await vectorStore.client.document.getCollection(collectionName);
    console.log(
      `Embedding status: ${c.document_embedded_count}/${c.document_count} documents embedded`
    );
    // eslint-disable-next-line no-promise-executor-return
    await new Promise((resolve) => setTimeout(resolve, 1000));
    if (c.status === "ready") {
      break;
    }
  }

  vectorStore
    .similaritySearchWithScore("sad music", 3, {
      where: { jsonpath: "$[*] ? (@.year == 1973)" }, // We should see a single result: The Rain Song
    })
    .then((results) => {
      console.log(`\n\nSimilarity Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });

  // We're not filtering here, but rather demonstrating MMR at work.
  // We could also add a filter to the MMR search, as we did with the
  // similarity search above.
  vectorStore
    .maxMarginalRelevanceSearch("sad music", {
      k: 3,
    })
    .then((results) => {
      console.log(`\n\nMMR Results:\n${JSON.stringify(results)}`);
    })
    .catch((e) => {
      if (e.name === "NotFoundError") {
        console.log("No results found");
      } else {
        throw e;
      }
    });
};
```
#### API Reference:
* [ZepVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [FakeEmbeddings](https://api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
### Example: Using a LangChain Embedding Class such as `OpenAIEmbeddings`
```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { randomUUID } from "crypto";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

export const run = async () => {
  const collectionName = `collection${randomUUID().split("-")[0]}`;

  const zepConfig = {
    apiUrl: "http://localhost:8000", // this should be the URL of your Zep implementation
    collectionName,
    embeddingDimensions: 1536, // this must match the width of the embeddings you're using
    isAutoEmbedded: false, // set to false to disable auto-embedding
  };

  const embeddings = new OpenAIEmbeddings();

  const vectorStore = await ZepVectorStore.fromDocuments(
    docs,
    embeddings,
    zepConfig
  );

  const results = await vectorStore.similaritySearchWithScore("bar", 3);

  console.log("Similarity Results:");
  console.log(JSON.stringify(results));

  const results2 = await vectorStore.maxMarginalRelevanceSearch("bar", {
    k: 3,
  });

  console.log("MMR Results:");
  console.log(JSON.stringify(results2));
};
```
#### API Reference:
* [ZepVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_zep.ZepVectorStore.html) from `@langchain/community/vectorstores/zep`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Weaviate
========
Weaviate is an open source vector database that stores both objects and vectors, allowing you to combine vector search with structured filtering. LangChain connects to Weaviate via the `weaviate-ts-client` package, the official TypeScript client for Weaviate.

LangChain inserts vectors directly into Weaviate and queries Weaviate for the nearest neighbors of a given vector, so you can use all of the LangChain Embeddings integrations with Weaviate.
Setup
---------------------------------------
Weaviate has its own standalone integration package with LangChain, accessible via [`@langchain/weaviate`](https://www.npmjs.com/package/@langchain/weaviate) on NPM!
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install @langchain/weaviate @langchain/openai @langchain/community
# or
yarn add @langchain/weaviate @langchain/openai @langchain/community
# or
pnpm add @langchain/weaviate @langchain/openai @langchain/community
```
You'll need to run Weaviate either locally or on a server; see [the Weaviate documentation](https://weaviate.io/developers/weaviate/installation) for more information.
Usage, insert documents
--------------------------------------------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store and fill it with some texts + metadata
  await WeaviateStore.fromTexts(
    ["hello world", "hi there", "how are you", "bye now"],
    [{ foo: "bar" }, { foo: "baz" }, { foo: "qux" }, { foo: "bar" }],
    new OpenAIEmbeddings(),
    {
      client,
      indexName: "Test",
      textKey: "text",
      metadataKeys: ["foo"],
    }
  );
}
```
#### API Reference:
* [WeaviateStore](https://api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, query documents
-----------------------------------------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  // Search the index without any filters
  const results = await store.similaritySearch("hello world", 1);
  console.log(results);
  /*
    [ Document { pageContent: 'hello world', metadata: { foo: 'bar' } } ]
  */

  // Search the index with a filter, in this case, only return results where
  // the "foo" metadata key is equal to "baz", see the Weaviate docs for more
  // https://weaviate.io/developers/weaviate/api/graphql/filters
  const results2 = await store.similaritySearch("hello world", 1, {
    where: {
      operator: "Equal",
      path: ["foo"],
      valueText: "baz",
    },
  });
  console.log(results2);
  /*
    [ Document { pageContent: 'hi there', metadata: { foo: 'baz' } } ]
  */
}
```
#### API Reference:
* [WeaviateStore](https://api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, maximal marginal relevance
--------------------------------------------------------------------------------------------------------------------------
You can use maximal marginal relevance search, which optimizes for similarity to the query AND diversity.
```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  const resultOne = await store.maxMarginalRelevanceSearch("Hello world", {
    k: 1,
  });
  console.log(resultOne);
}
```
#### API Reference:
* [WeaviateStore](https://api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Usage, delete documents
--------------------------------------------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-explicit-any */
import weaviate, { ApiKey } from "weaviate-ts-client";
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";

export async function run() {
  // Something wrong with the weaviate-ts-client types, so we need to disable
  const client = (weaviate as any).client({
    scheme: process.env.WEAVIATE_SCHEME || "https",
    host: process.env.WEAVIATE_HOST || "localhost",
    apiKey: new ApiKey(process.env.WEAVIATE_API_KEY || "default"),
  });

  // Create a store for an existing index
  const store = await WeaviateStore.fromExistingIndex(new OpenAIEmbeddings(), {
    client,
    indexName: "Test",
    metadataKeys: ["foo"],
  });

  const docs = [{ pageContent: "see ya!", metadata: { foo: "bar" } }];

  // Also supports an additional {ids: []} parameter for upsertion
  const ids = await store.addDocuments(docs);

  // Search the index without any filters
  const results = await store.similaritySearch("see ya!", 1);
  console.log(results);
  /*
    [ Document { pageContent: 'see ya!', metadata: { foo: 'bar' } } ]
  */

  // Delete documents with ids
  await store.delete({ ids });

  const results2 = await store.similaritySearch("see ya!", 1);
  console.log(results2);
  /*
    []
  */

  const docs2 = [
    { pageContent: "hello world", metadata: { foo: "bar" } },
    { pageContent: "hi there", metadata: { foo: "baz" } },
    { pageContent: "how are you", metadata: { foo: "qux" } },
    { pageContent: "hello world", metadata: { foo: "bar" } },
    { pageContent: "bye now", metadata: { foo: "bar" } },
  ];

  await store.addDocuments(docs2);

  const results3 = await store.similaritySearch("hello world", 1);
  console.log(results3);
  /*
    [ Document { pageContent: 'hello world', metadata: { foo: 'bar' } } ]
  */

  // Delete documents with a filter
  await store.delete({
    filter: {
      where: {
        operator: "Equal",
        path: ["foo"],
        valueText: "bar",
      },
    },
  });

  const results4 = await store.similaritySearch("hello world", 1, {
    where: {
      operator: "Equal",
      path: ["foo"],
      valueText: "bar",
    },
  });
  console.log(results4);
  /*
    []
  */
}
```
#### API Reference:
* [WeaviateStore](https://api.js.langchain.com/classes/langchain_weaviate.WeaviateStore.html) from `@langchain/weaviate`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Pinecone
========
You can use [Pinecone](https://www.pinecone.io/) vectorstores with LangChain. To get started, install the integration package and the official Pinecone SDK with:
**Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
```bash
npm install -S @langchain/pinecone @pinecone-database/pinecone
# or
yarn add @langchain/pinecone @pinecone-database/pinecone
# or
pnpm add @langchain/pinecone @pinecone-database/pinecone
```
The examples below use OpenAI embeddings, but you can swap in whichever provider you'd like. Keep in mind that different embedding models may produce a different number of dimensions (a quick way to check is sketched after the install commands below):
```bash
npm install -S @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
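If you're unsure how many dimensions your model produces, one quick check is to embed a probe string and inspect the vector's length (a minimal sketch; the probe text is arbitrary):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

// Embed a throwaway string purely to discover the vector width;
// your Pinecone index's dimension must match this number.
const embeddings = new OpenAIEmbeddings();
const probe = await embeddings.embedQuery("dimension probe");
console.log(probe.length); // e.g. 1536 for text-embedding-ada-002
```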
Index docs
------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  pineconeIndex,
  maxConcurrency: 5, // Maximum number of batch requests to allow at once. Each batch is 1000 vectors.
});
```
#### API Reference:
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Query docs
------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

/* Search the vector DB independently with metadata filters */
const results = await vectorStore.similaritySearch("pinecone", 1, {
  foo: "bar",
});
console.log(results);
/*
  [
    Document {
      pageContent: 'pinecone is a vector db',
      metadata: { foo: 'bar' }
    }
  ]
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Delete docs
---------------------------------------------------------
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const embeddings = new OpenAIEmbeddings();
const pineconeStore = new PineconeStore(embeddings, { pineconeIndex });

const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

const pageContent = "some arbitrary content";

// Also takes an additional {ids: []} parameter for upsertion
const ids = await pineconeStore.addDocuments(docs);

const results = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});
console.log(results);
/*
[
  Document {
    pageContent: 'pinecone is a vector db',
    metadata: { foo: 'bar' },
  },
  Document {
    pageContent: "the quick brown fox jumped over the lazy dog",
    metadata: { foo: "bar" },
  }
]
*/

await pineconeStore.delete({
  ids: [ids[0], ids[1]],
});

const results2 = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});
console.log(results2);
/*
  []
*/
```
#### API Reference:
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
Maximal marginal relevance search
---------------------------------------------------------------------------------------------------------------------------
Pinecone supports maximal marginal relevance search, which first fetches the documents most similar to the input, then re-ranks them to optimize for diversity.
```typescript
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

/* Search the vector DB independently with meta filters */
const results = await vectorStore.maxMarginalRelevanceSearch("pinecone", {
  k: 5,
  fetchK: 20, // Default value for the number of initial documents to fetch for reranking.
  // You can pass a filter as well
  // filter: {},
});

console.log(results);
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
SingleStore
===========
[SingleStoreDB](https://singlestore.com/) is a high-performance distributed SQL database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premise. It provides vector storage, as well as vector functions like [dot\_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html) and [euclidean\_distance](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/euclidean_distance.html), thereby supporting AI applications that require text similarity matching.
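To make those distance functions concrete, here is the arithmetic they compute, sketched in plain TypeScript (an illustration only; the database evaluates the equivalent server-side over stored vectors):

```typescript
// Illustration of the math behind SingleStoreDB's vector functions.
const dotProduct = (a: number[], b: number[]): number =>
  a.reduce((sum, x, i) => sum + x * b[i], 0);

const euclideanDistance = (a: number[], b: number[]): number =>
  Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));

console.log(dotProduct([1, 2], [3, 4])); // 11
console.log(euclideanDistance([0, 0], [3, 4])); // 5
```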
Compatibility
Only available on Node.js.
LangChain.js requires the `mysql2` library to create a connection to a SingleStoreDB instance.
Setup
-----
1. Establish a SingleStoreDB environment. You have the flexibility to choose between [Cloud-based](https://docs.singlestore.com/managed-service/en/getting-started-with-singlestoredb-cloud.html) or [On-Premise](https://docs.singlestore.com/db/v8.1/en/developer-resources/get-started-using-singlestoredb-for-free.html) editions.
2. Install the `mysql2` JS client:

```bash
npm install -S mysql2
# or
yarn add mysql2
# or
pnpm add mysql2
```
Usage
-----
`SingleStoreVectorStore` manages a connection pool. It is recommended to call `await store.end();` before terminating your application to ensure all connections are properly closed and to prevent resource leaks.
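A `try`/`finally` block is one way to guarantee the pool is released even when a search throws; a minimal sketch, assuming a `vectorStore` created as in the examples below:

```typescript
// Sketch: always close the connection pool, even when an error is thrown.
// Assumes `vectorStore` is a SingleStoreVectorStore instance (see below).
try {
  const results = await vectorStore.similaritySearch("hello world", 1);
  console.log(results);
} finally {
  await vectorStore.end();
}
```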
### Standard usage
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Below is a straightforward example showing how to import the relevant module and perform a basic similarity search with `SingleStoreVectorStore`:
```typescript
import { SingleStoreVectorStore } from "@langchain/community/vectorstores/singlestore";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Hello world", "Bye bye", "hello nice world"],
    [{ id: 2 }, { id: 1 }, { id: 3 }],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
    }
  );
  const resultOne = await vectorStore.similaritySearch("hello world", 1);
  console.log(resultOne);
  await vectorStore.end();
};
```
#### API Reference:
* [SingleStoreVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_singlestore.SingleStoreVectorStore.html) from `@langchain/community/vectorstores/singlestore`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
### Metadata Filtering
If you need to filter results on specific metadata fields, you can pass a filter parameter to narrow the search to documents that match all of the fields specified in the filter object:
```typescript
import { SingleStoreVectorStore } from "@langchain/community/vectorstores/singlestore";
import { OpenAIEmbeddings } from "@langchain/openai";

export const run = async () => {
  const vectorStore = await SingleStoreVectorStore.fromTexts(
    ["Good afternoon", "Bye bye", "Boa tarde!", "Até logo!"],
    [
      { id: 1, language: "English" },
      { id: 2, language: "English" },
      { id: 3, language: "Portugese" },
      { id: 4, language: "Portugese" },
    ],
    new OpenAIEmbeddings(),
    {
      connectionOptions: {
        host: process.env.SINGLESTORE_HOST,
        port: Number(process.env.SINGLESTORE_PORT),
        user: process.env.SINGLESTORE_USERNAME,
        password: process.env.SINGLESTORE_PASSWORD,
        database: process.env.SINGLESTORE_DATABASE,
      },
      distanceMetric: "EUCLIDEAN_DISTANCE",
    }
  );
  const resultOne = await vectorStore.similaritySearch("greetings", 1, {
    language: "Portugese",
  });
  console.log(resultOne);
  await vectorStore.end();
};
```
#### API Reference:
* [SingleStoreVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_singlestore.SingleStoreVectorStore.html) from `@langchain/community/vectorstores/singlestore`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
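Like other LangChain vector stores, a `SingleStoreVectorStore` can also be exposed as a retriever for use in chains. A brief sketch, assuming the `vectorStore` instance from the example above:

```typescript
// Sketch: wrap the store as a retriever that returns the top 2 matches.
const retriever = vectorStore.asRetriever(2);
const docs = await retriever.getRelevantDocuments("hello world");
console.log(docs);
```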
https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/timeouts/
Adding a timeout
================
By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a `timeout` option, in milliseconds, when you instantiate the model. For example, for OpenAI:
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  timeout: 1000, // 1s timeout
});

/* Embed queries */
const res = await embeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await embeddings.embedDocuments(["Hello world", "Bye bye"]);
console.log({ documentRes });
```
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Currently, the timeout option is only supported for OpenAI models.
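For other providers, one hedged workaround is to race the call against a timer in plain JavaScript, assuming the `embeddings` instance from the example above; note that this abandons the slow request rather than aborting it:

```typescript
// Sketch: a generic client-side timeout via Promise.race.
// Works with any embeddings class; the underlying HTTP request is not cancelled.
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
    ),
  ]);

const result = await withTimeout(embeddings.embedQuery("Hello world"), 1000);
```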
https://js.langchain.com/v0.1/docs/integrations/vectorstores/analyticdb/
AnalyticDB
==========
[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
`AnalyticDB for PostgreSQL` is developed based on the open source `Greenplum Database` project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.
This guide shows how to use functionality related to the `AnalyticDB` vector database.
To run, you should have an [AnalyticDB](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) instance up and running:
* Using [AnalyticDB Cloud Vector Database](https://www.alibabacloud.com/product/hybriddb-postgresql).
Compatibility
Only available on Node.js.
Setup
-----
LangChain.js uses [node-postgres](https://node-postgres.com/) as the connection pool for the AnalyticDB vector store.
```bash
npm install -S pg
# or
yarn add pg
# or
pnpm add pg
```
You will also need [pg-copy-streams](https://github.com/brianc/node-pg-copy-streams) to add batches of vectors quickly.
```bash
npm install -S pg-copy-streams
# or
yarn add pg-copy-streams
# or
pnpm add pg-copy-streams
```
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Usage
-----
Security
User-generated data such as usernames should not be used as input for the collection name.
**This may lead to SQL Injection!**
```typescript
import { AnalyticDBVectorStore } from "@langchain/community/vectorstores/analyticdb";
import { OpenAIEmbeddings } from "@langchain/openai";

const connectionOptions = {
  host: process.env.ANALYTICDB_HOST || "localhost",
  port: Number(process.env.ANALYTICDB_PORT) || 5432,
  database: process.env.ANALYTICDB_DATABASE || "your_database",
  user: process.env.ANALYTICDB_USERNAME || "username",
  password: process.env.ANALYTICDB_PASSWORD || "password",
};

const vectorStore = await AnalyticDBVectorStore.fromTexts(
  ["foo", "bar", "baz"],
  [{ page: 1 }, { page: 2 }, { page: 3 }],
  new OpenAIEmbeddings(),
  { connectionOptions }
);

const result = await vectorStore.similaritySearch("foo", 1);
console.log(JSON.stringify(result));
// [{"pageContent":"foo","metadata":{"page":1}}]

await vectorStore.addDocuments([{ pageContent: "foo", metadata: { page: 4 } }]);

const filterResult = await vectorStore.similaritySearch("foo", 1, {
  page: 4,
});
console.log(JSON.stringify(filterResult));
// [{"pageContent":"foo","metadata":{"page":4}}]

const filterWithScoreResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 3 }
);
console.log(JSON.stringify(filterWithScoreResult));
// [[{"pageContent":"baz","metadata":{"page":3}},0.26075905561447144]]

const filterNoMatchResult = await vectorStore.similaritySearchWithScore(
  "foo",
  1,
  { page: 5 }
);
console.log(JSON.stringify(filterNoMatchResult));
// []

// need to manually close the connection pool
await vectorStore.end();
```
#### API Reference:
* [AnalyticDBVectorStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_analyticdb.AnalyticDBVectorStore.html) from `@langchain/community/vectorstores/analyticdb`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
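As the security callout above warns, never build a collection name from untrusted input. One possible mitigation, sketched with a hypothetical helper (not part of the integration), is to allowlist simple identifiers before the name reaches the store:

```typescript
// Sketch: reject collection names that are not plain identifiers.
// `assertSafeCollectionName` is a hypothetical helper for illustration.
function assertSafeCollectionName(name: string): string {
  if (!/^[A-Za-z_][A-Za-z0-9_]{0,62}$/.test(name)) {
    throw new Error(`Unsafe collection name: ${name}`);
  }
  return name;
}

const collectionName = assertSafeCollectionName("langchain_docs");
```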
https://js.langchain.com/v0.1/docs/integrations/vectorstores/clickhouse/
ClickHouse
==========
Compatibility
Only available on Node.js.
[ClickHouse](https://clickhouse.com/) is a robust, open-source columnar database built for analytical queries and efficient storage; it is designed to provide a powerful combination of vector search and analytics.
Setup
-----
1. Launch a ClickHouse cluster. Refer to the [ClickHouse Installation Guide](https://clickhouse.com/docs/en/getting-started/install/) for details.
2. After launching a ClickHouse cluster, retrieve the `Connection Details` from the cluster's `Actions` menu. You will need the host, port, username, and password.
3. Install the required Node.js peer dependencies for ClickHouse in your workspace:

```bash
npm install -S @clickhouse/client mysql2
# or
yarn add @clickhouse/client mysql2
# or
pnpm add @clickhouse/client mysql2
```
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai @langchain/community
# or
yarn add @langchain/openai @langchain/community
# or
pnpm add @langchain/openai @langchain/community
```
Index and Query Docs
--------------------
```typescript
import { ClickHouseStore } from "@langchain/community/vectorstores/clickhouse";
import { OpenAIEmbeddings } from "@langchain/openai";

// Initialize ClickHouse store from texts
const vectorStore = await ClickHouseStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  {
    host: process.env.CLICKHOUSE_HOST || "localhost",
    port: process.env.CLICKHOUSE_PORT || 8443,
    username: process.env.CLICKHOUSE_USER || "username",
    password: process.env.CLICKHOUSE_PASSWORD || "password",
    database: process.env.CLICKHOUSE_DATABASE || "default",
    table: process.env.CLICKHOUSE_TABLE || "vector_table",
  }
);

// Sleep 1 second to ensure that the search occurs after the successful insertion of data.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

// Perform similarity search without filtering
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

// Perform similarity search with filtering
const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [ClickHouseStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_clickhouse.ClickHouseStore.html) from `@langchain/community/vectorstores/clickhouse`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Query Docs From an Existing Collection
--------------------------------------
```typescript
import { ClickHouseStore } from "@langchain/community/vectorstores/clickhouse";
import { OpenAIEmbeddings } from "@langchain/openai";

// Initialize ClickHouse store from an existing index
const vectorStore = await ClickHouseStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    host: process.env.CLICKHOUSE_HOST || "localhost",
    port: process.env.CLICKHOUSE_PORT || 8443,
    username: process.env.CLICKHOUSE_USER || "username",
    password: process.env.CLICKHOUSE_PASSWORD || "password",
    database: process.env.CLICKHOUSE_DATABASE || "default",
    table: process.env.CLICKHOUSE_TABLE || "vector_table",
  }
);

// Sleep 1 second to ensure that the search occurs after the successful insertion of data.
// eslint-disable-next-line no-promise-executor-return
await new Promise((resolve) => setTimeout(resolve, 1000));

// Perform similarity search without filtering
const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

// Perform similarity search with filtering
const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [ClickHouseStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_clickhouse.ClickHouseStore.html) from `@langchain/community/vectorstores/clickhouse`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
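Once connected to an existing index, you can also append new documents with the standard `addDocuments` method; a brief sketch, assuming the `vectorStore` from the example above:

```typescript
import { Document } from "@langchain/core/documents";

// Sketch: append a new document to the existing ClickHouse-backed index.
await vectorStore.addDocuments([
  new Document({
    pageContent: "Good morning world",
    metadata: { id: 4, name: "4" },
  }),
]);
```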
https://js.langchain.com/v0.1/docs/integrations/vectorstores/myscale/
MyScale
=======
Compatibility
Only available on Node.js.
[MyScale](https://myscale.com/) is an emerging AI database that harmonizes the power of vector search and SQL analytics, providing a managed, efficient, and responsive experience.
Setup
-----
1. Launch a cluster through [MyScale's Web Console](https://console.myscale.com/). See [MyScale's official documentation](https://docs.myscale.com/en/quickstart/) for more information.
2. After launching a cluster, view your `Connection Details` from your cluster's `Actions` menu. You will need the host, port, username, and password.
3. Install the required Node.js peer dependency in your workspace.
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install -S @langchain/openai @clickhouse/client @langchain/community
# or
yarn add @langchain/openai @clickhouse/client @langchain/community
# or
pnpm add @langchain/openai @clickhouse/client @langchain/community
```
Index and Query Docs
--------------------
```typescript
import { MyScaleStore } from "@langchain/community/vectorstores/myscale";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MyScaleStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [
    { id: 2, name: "2" },
    { id: 1, name: "1" },
    { id: 3, name: "3" },
  ],
  new OpenAIEmbeddings(),
  {
    host: process.env.MYSCALE_HOST || "localhost",
    port: process.env.MYSCALE_PORT || "8443",
    username: process.env.MYSCALE_USERNAME || "username",
    password: process.env.MYSCALE_PASSWORD || "password",
    database: "default", // defaults to "default"
    table: "your_table", // defaults to "vector_table"
  }
);

const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [MyScaleStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_myscale.MyScaleStore.html) from `@langchain/community/vectorstores/myscale`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Query Docs From an Existing Collection
--------------------------------------
```typescript
import { MyScaleStore } from "@langchain/community/vectorstores/myscale";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MyScaleStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  {
    host: process.env.MYSCALE_HOST || "localhost",
    port: process.env.MYSCALE_PORT || "8443",
    username: process.env.MYSCALE_USERNAME || "username",
    password: process.env.MYSCALE_PASSWORD || "password",
    database: "default", // defaults to "default"
    table: "your_table", // defaults to "vector_table"
  }
);

const results = await vectorStore.similaritySearch("hello world", 1);
console.log(results);

const filteredResults = await vectorStore.similaritySearch("hello world", 1, {
  whereStr: "metadata.name = '1'",
});
console.log(filteredResults);
```
#### API Reference:
* [MyScaleStore](https://api.js.langchain.com/classes/langchain_community_vectorstores_myscale.MyScaleStore.html) from `@langchain/community/vectorstores/myscale`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
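If you also want raw distances alongside each hit, the base vector store API provides `similaritySearchWithScore`; a short sketch, assuming the `vectorStore` from above:

```typescript
// Sketch: retrieve documents together with their similarity scores.
const scored = await vectorStore.similaritySearchWithScore("hello world", 2);
for (const [doc, score] of scored) {
  console.log(score, doc.pageContent);
}
```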
https://js.langchain.com/v0.1/docs/modules/data_connection/text_embedding/api_errors/
Dealing with API errors
=======================
If the model provider returns an error from their API, by default LangChain will retry up to 6 times with exponential backoff. This enables error recovery without any additional effort from you. If you want to change this behavior, pass a `maxRetries` option when you instantiate the model. For example:
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const model = new OpenAIEmbeddings({ maxRetries: 10 });
```
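Once retries are exhausted, the final error is thrown to the caller, so it is still worth handling it explicitly; a minimal sketch, assuming the `model` instance from the example above:

```typescript
// Sketch: handle the error surfaced after all retries fail.
try {
  const res = await model.embedQuery("Hello world");
  console.log(res);
} catch (e) {
  console.error("Embedding failed after retries:", e);
}
```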